
Classical and Operant Conditioning


Have you ever wondered how you will manage a class of students? There are various techniques teachers can use to “manipulate” students into acting in a manner that is appropriate and conducive to learning. Behaviorism tells us that learning is observed through changes in behavior: we see learning occur when students respond to situations in their environment. Cognitivists, by contrast, suggest that learning can also be an unobservable change that occurs within the brain. Behaviorism is prominent in a teacher’s classroom management plan, since that is where the teacher wants to see an observable change in behavior.
Read Chapter 5 of our text to gain an understanding of Pavlov’s classical and Skinner’s operant conditioning.
In the first paragraph of your response, describe the difference between classical and operant conditioning. Reflect on your own educational experience. Describe examples of classical and operant conditioning that you have experienced. Also, describe outcomes of each.
In the next paragraph of your response, compare positive and negative reinforcement strategies. How is negative reinforcement different from punishment? Refer to the examples you shared in your first paragraph: explain whether they were instances of positive reinforcement, negative reinforcement, or punishment. Provide your reasoning and explain how they impacted your learning.
Chapter 5
LeFrançois, G. (2011). Psychology for teaching (11th ed.). San Diego, CA: Bridgepoint Education, Inc.
Chapter Five: Behaviorism and Social Cognitive Theory
He was apparently the victim of what psychologists call one-shot taste aversion learning. It’s a type of learning easily illustrated with animals like rats. When rats are given something to eat and then exposed to a single dose of radiation, which makes them ill, they will then refuse to eat the food they ate just before the radiation. This is a special kind of learning that can be extremely important for survival. If we, and other animals, didn’t easily learn to avoid things that make us ill, many of us wouldn’t be here today: Too many of our ancestors would have continued to eat mushrooms of the kind I can easily find.
A Definition of Learning
The learning of taste aversions is a biologically based phenomenon, of little direct importance to the business of teaching. However, more general forms of learning are absolutely central to the educational enterprise, which is really all about learning.
Learning, you see, is the acquisition of information and knowledge, of skills and habits, and of attitudes and beliefs. It always involves a change in one of these areas—a change that is brought about by the learner’s experiences. Accordingly, psychologists define learning as all relatively permanent changes in potential for behavior that result from experience but are not due to fatigue, maturation, drugs, injury, or disease. (See Figure 5.1.)
Figure 5.1
Evidence of learning is found in actual or potential changes in behavior as a result of experience. But learning itself is an invisible, internal neurological process.
Note that learning is found not only in actual but also in potential changes in behavior because not all changes involved in learning are obvious and observable. For example, in the case entitled “The Talking Marks,” there are some immediately apparent changes in the students’ actual behavior—as, for example, when Tyler makes a pair of “talking marks” and places them appropriately, a behavior of which he was earlier incapable. There may also be other important changes that are not apparent but are still a fundamental part of learning.
Cases from the Classroom: The Talking Marks
The Place: Lynn Swann’s 2nd grade class
The Situation: A punctuation lesson on quotation marks
Ms. Swann: And what we have to do is put the talking marks around the words that come right out of Mr. Brown’s mouth. (Demonstrating with a cartoon character who has just said, “Here’s my dog.”)
Tyler: Can I do it, Ms. Swann? Can I?
Ms. Swann: May I, Tyler. It’s may I. Yes you may and we’ll see if you can. (Ms. Swann erases the quotation marks. Tyler takes the green pen and makes a pair of recognizable opening and closing quotation marks. The children have already practiced making these “talking marks.”)
Ms. Swann: Very good, Tyler. I see that you can do it.
Jenna: Can I do it too? Can I?
Ms. Swann: Weren’t you paying any attention at all, Jenna? It’s may! May, not can. No, you may not do it right now. We have to move along because it’s going to be lunch time soon. (and the lesson continues . . . )
Learning is defined as changes in potential for behavior. Hence, learning is not always evident in actual performance. That Amin’s head is full of all those new French words will not become evident until he speaks or writes them.
For example, there may be an unfortunate change in Jenna’s eagerness to participate in class activities following Ms. Swann’s refusal to allow her to do so and because of the loud scolding she received for the may I–can I grammatical error. This change in disposition—that is, in the person’s inclination to do or not to do something—is also an example of learning. Changes in disposition have to do with motivation, a topic discussed in Chapter 8. Motivational changes cannot always be observed but are no less real or important.
Learning often involves changes in capability—that is, changes in the skills or the knowledge required to do something. Like changes in disposition, changes in capability are not always observed directly. For instance, in Ms. Swann’s class, many other students will probably also learn to make quotation marks and to place them “around the words that come right out of Mr. Brown’s mouth.” But, like Jenna, most will not be given an opportunity to demonstrate this learning immediately. To determine whether students’ dispositions or capabilities have changed following instruction, teachers need to give them an opportunity to engage in the relevant behavior—that is, to perform.
Performance refers to actual behavior—to a real-life demonstration of knowledge or capability. When Leonard recites a poem he has been asked to memorize, when Lenora writes a test, when William dunks the basketball for his coach, when Jenna later puts the “talking marks” where they belong, they are performing. That is, they are demonstrating the effects of learning through their actual performance. What’s important to note is that the changes in capabilities and dispositions that define learning will not be evident until learners are placed in a situation requiring the
Pavlov’s work was important because it demonstrated that the processes of learning could be studied scientifically, and that the principles of conditioning were applicable to humans as well. Why is it significant that we understand that, just as conditioning is possible, so is “unconditioning”?
The basic facts of classical conditioning, which, according to Bitterman (2006), have changed very little since Pavlov’s work, are these: A stimulus or situation that readily leads to a response can be paired repeatedly with a neutral stimulus (one that does not lead to a response) so that eventually the neutral stimulus will have been conditioned to bring about the response. Note that learning in classical conditioning is typically unconscious. That is, learners do not respond to the conditioned stimulus because they become aware of the relationship between it and an unconditioned stimulus.
Watson’s Environmentalism
According to J. B. Watson (1913, 1916), who was greatly influenced by the work of Pavlov, people are born with a limited number of reflexes—simple, unlearned behaviors. Learning, explained Watson, is just a matter of classical conditioning involving these reflexes. Hence, differences among people are entirely a function of their experiences. This point of view is referred to as environmentalism.
Watson’s view was extremely influential in the early development of psychology in the United States. His insistence on precision, rigor, and objectivity was very much in line with the scientific spirit of the times—as was his rejection of popular but vague terms such as mind, feeling, and sensation (Berman & Lyons, 2007). The belief that what we become is a function of our experiences also presents a just and egalitarian view of humans. If what we become is truly a function of the experiences to which we are subjected, we are in fact born equal. Watson declared that any child can become a doctor or a judge. In fact, however, things are not quite that simple: Not everybody can become a doctor or a judge.
Instructional Implications of Pavlov’s and Watson’s Behaviorism
Classical conditioning, especially of emotional reactions, occurs in all schools, virtually at all times, regardless of the other kinds of learning going on at the same time. And it is largely through these unconscious processes that students come to dislike schools, subjects, teachers, and related stimuli—or to like them.
To illustrate, a school subject may be considered a neutral stimulus that evokes little emotional response when first encountered. But distinctive stimuli that accompany the subject may be associated with pleasant responses (a comfortable desk, a friendly teacher) or with more negative reactions (a cold, hard desk; a cold, hard teacher with a grating voice and squeaking chalk). After repeated pairings of the subject with a distinctive unconditioned stimulus, the emotions (attitudes) associated with the unconditioned stimulus may become classically conditioned to the subject (see Figure 5.2).
Figure 5.2
In classical conditioning, an initially neutral stimulus (NS) is paired with an unconditioned, fear-producing stimulus (US) so that the subject is eventually conditioned to fear the previously neutral stimulus. Fear is now a conditioned response (CR) to a conditioned stimulus (CS).
In the case “Of Pig Grunting and Flinching”, there are clear examples of classical conditioning in the school. Most obviously, Robert has been conditioned to flinch when Mrs. Grundy makes noises. Less obvious but perhaps even more important, he has probably also acquired a number of negative emotional responses associated not only with Mrs. Grundy’s noises but also with her, with the classroom, with the subject she teaches—perhaps even with school in general.
Cases from the Classroom: Of Pig Grunting and Flinching
The Time: 1848
The Place: Mrs. Evelyn Grundy’s classroom in Raleigh, North Carolina
The Situation: 6-year-old Robert has been “Misbehaving to Girls and Telling Lyes”
In this Raleigh school system in 1848, the prescribed punishment for Misbehaving to Girls is 10 lashes, and for Telling Lyes it’s 7 lashes, for a total of 17 lashes.* Mrs. Grundy administers the punishment herself. And every time she raises the cane to strike Robert, the effort makes her squeal hoarsely, a little like pig grunting. By the tenth lash, Robert has begun to flinch just before the cane hits. He cries out quite loudly when it lands.
Later that day, when Mrs. Grundy is passing out the spellers, she turns her back to Edward and his ruler-propelled spitball catches her smack behind the left ear and she squeals loudly. Robert flinches.
*By our standards, the punishment sounds extreme and barbaric. But in 1848, it was the prescribed punishment in this Raleigh school (see Coon, 1915).
The clearest and most important instructional implications of classical conditioning include the following:
Teachers need to do whatever they can to maximize the number, the distinctiveness, and the potency of pleasant unconditioned stimuli in their classrooms.
Teachers should try to minimize the unpleasant aspects of being a student, thus reducing the number and potency of negative unconditioned stimuli in their classrooms.
Teachers need to know what is being paired with what in their classrooms.
The old adage that learning should be fun is more than a schoolchild’s frivolous plea; it follows directly from classical conditioning theory. A teacher who makes students smile and laugh while she has them repeat the 6 times table may, because of the variety of stimuli and responses being paired, succeed in teaching students (1) how to smile and laugh—a worthwhile undertaking in its own right, (2) to associate stimuli such as 6 X 7 with responses such as “42”—a valuable piece of information, and (3) to like arithmetic—and the teacher, the school, the smell of chalk, the feel of a book’s pages, and on and on.
What does a teacher who makes students suffer grimly through their multiplication tables teach?
Skinner’s Operant Conditioning
By definition, behaviorists are concerned with behavior. They define learning as changes in behavior and look to the environment for explanations of these changes. Their theories are associative; they deal with associations that are formed among stimuli and responses. And, typically, they explain learning on the basis of contiguity (simultaneity of stimulus and response events) or in terms of the effects of behavior (reinforcement and punishment). Pavlov and Watson are contiguity theorists; Thorndike is a reinforcement theorist, and so is B. F. Skinner, one of the most influential psychologists of the 20th century and the man behind the theory of operant conditioning.
Respondents and Operants
There are two kinds of behavior, explained Skinner. Elicited responses are all the many responses that are caused by stimuli and can be classically conditioned (like sneezing; blinking; being angry, afraid, or excited). These are also called respondents because they occur in response to a stimulus. They are largely automatic, involuntary, and almost inevitable responses to specific situations.
Emitted responses are a much larger and more important class of behaviors that are not elicited by any known stimuli but are simply emitted. Skinner called these behaviors operants because, in a sense, they are operations performed by the organism. Driving a car, surfing the Internet, singing, reading a book, and kissing a baby are generally operants. Their common characteristics are that they are deliberate and intentional. And they are subject to the laws of operant conditioning. (See Table 5.1.)
Table 5.1: Classical and Operant Conditioning
Classical (Pavlovian): Deals with respondents, which are elicited by stimuli and appear involuntary; reactions to the environment; Type S conditioning (S for stimuli).
Operant (Skinnerian): Deals with operants, which are emitted as purposeful (instrumental) acts; actions upon the environment; Type R conditioning (R for reinforcement).
What Is Operant Conditioning?
The clearest illustration of operant conditioning involves a typical Skinnerian experiment in which a rat is placed in a Skinner box, a small, controlled environment (see Figure 5.4). The Skinner box is constructed to make certain responses highly probable and to allow the experimenter to measure these responses and to punish or reward them. For a typical experiment, the box might contain a lever, a light, an electric grid on the floor, and a food tray, arranged so that when the rat depresses the lever, the light goes on and a food pellet is released into the tray. Most rats will quickly learn to depress the lever if rewarded. And they can also be trained to avoid the lever if depressing it activates a mild electric current in the floor grid, or to depress it if doing so turns off a current that is otherwise constant.
Figure 5.4
A Skinner box. Operant conditioning is clearly demonstrated by Skinner’s experiments observing a rat’s (e) interactions with a light (a), food tray (b), lever (c), and electric grid (d).
From G. R. Lefrançois, Theories of Human Learning: What the Old Woman Said (5th ed.). Copyright 2006 Wadsworth.
Most of the basic elements of Skinner’s theory are evident in this situation. The rat’s depressing the lever is an operant—a behavior that is simply emitted rather than being elicited by a specific stimulus. The food pellets are reinforcement: they increase the probability that the rat will depress the lever.
In general terms, operant conditioning increases the probability that a response will occur again. Furthermore, the reward, together with whatever discriminated stimuli (SD) are present at the time of reinforcement, are stimuli that, after learning, may bring about the operant. For example, the rat’s view (and smell) of the inside of the Skinner box may eventually serve as stimuli for lever-pressing behavior. But, cautions Skinner, these are not stimuli in the sense that a puff of air in the eye is a stimulus that elicits a blink. Rather, these discriminated stimuli are signals that a certain behavior may lead to reinforcement. (See Figure 5.5 for a model of operant learning in the classroom.)
Figure 5.5
In operant conditioning, unlike classical conditioning, the original response is emitted rather than elicited by a stimulus. In this example, a variety of off-task and on-task behaviors are emitted. Reinforcement leads to the more frequent occurrence of on-task behaviors.
The causes of behavior, Skinner insisted, are outside the organism; they have to do with the consequences of actions. Thus, his science of behavior seeks to discover how consequences affect behavior (Skinner, 1969; see Figure 5.6).
Figure 5.6
The variables Skinner studied.
Reinforcement is the effect of any stimulus that increases the probability that a response will occur. There are two broad classes of reinforcers: primary and generalized. A primary reinforcer is a stimulus that the organism does not have to learn is reinforcing. Primary reinforcers are ordinarily related to unlearned needs such as the need for food, drink, or sex. Stimuli that satisfy these needs tend to be highly reinforcing for most organisms.
A generalized reinforcer is a previously neutral stimulus that, through repeated pairings with other reinforcers in various situations, has become reinforcing for many behaviors. In one sense, five dollars is only a piece of paper; that’s all it is to a very young child. But to an older child or an adult for whom dollars have been paired with many reinforcers, five dollars—or better yet, a whole fistful of five dollar bills—is an extremely powerful generalized reinforcer. And so are prestige, fame, power, and high grades.
A stimulus is a positive reinforcer if it increases the probability of a response occurring when it is added to a situation. A negative reinforcer has the same effect when it is removed from the situation. Negative reinforcers tend to be aversive stimuli (unpleasant outcomes such as an electric shock or detention). Positive reinforcers tend to be positive stimuli (pleasant outcomes such as money, food, or tokens).
In the Skinner box example, food pellets are positive reinforcers—as might be the light if it’s paired with food. However, if a mild current were turned on in the electric grid that runs through the floor of the box, and if this current were turned off only when the rat depressed the lever, turning off the current would be an example of an aversive stimulus serving as a negative reinforcer.
In summary, there are two types of reinforcement: One involves presenting a pleasant stimulus (positive reinforcement; reward); the other involves removing an aversive stimulus (negative reinforcement; relief). Similarly, there are two types of punishment: removing a pleasant stimulus (penalty; often termed removal punishment); and presenting an aversive stimulus (castigation; sometimes called presentation punishment).
Keep in mind that both positive and aversive stimuli can be used for either reinforcement or punishment. As Figure 5.7 illustrates, this depends on whether stimuli are added to or taken away from the situation following a behavior. Also keep in mind that whether a stimulus is reinforcing or not depends entirely on its effect on behavior. (See Figure 5.8 for classroom examples of operant conditioning.)
Figure 5.7
Reinforcement and punishment.
Figure 5.8
The first two classroom examples of operant conditioning (positive and negative reinforcement) lead to an increase in the likelihood of the response. The last two examples (both forms of punishment) lead to a decrease in the likelihood of the response. Teachers may also inadvertently reinforce maladaptive behaviors (second example). Note that in real life, the implications of each of these consequences may not be so simple and straightforward.
Aversive and Positive Control
Note that negative reinforcement and punishment describe two very different situations. The two are often confused because each can involve aversive stimuli. But each has very different effects on behavior. Specifically, punishment is meant to bring about a reduction in behavior, whereas negative reinforcement, like positive reinforcement, increases the probability that a response will occur. Thus, a child can be encouraged to speak politely to teachers by being smiled at for saying “please” and “thank you” (positive reinforcement). Another child can be beaten with a cane (or threatened therewith) when “please” and “thank-you” are forgotten (punishment)—with the clear understanding that the cane will be put away only when behavior conforms to the teacher’s standards of politeness (negative reinforcement). In the end, both children may be wonderfully polite. But which child, do you suppose, will like teachers and schools more? There is surely an extremely important lesson here for teachers.
Strange as it might seem, the use of negative reinforcement as a means of control is highly prevalent in today’s schools, homes, and churches, as is the use of punishment. These methods of aversive control (in contrast to positive control) are evident in the issuance of low grades and verbal rebukes, threats of punishment, detention, and the unpleasant fates that most major religions promise transgressors. These methods are evident as well in our legal and judicial systems, which are extraordinarily punitive rather than rewarding.
Types of Reinforcement Schedules
The variables Skinner was most interested in investigating were type of reinforcement and reinforcement schedule (how reinforcement is presented). He wanted to know how these affect behavior. He looked at how rapidly learning occurs, the rate of responding, and how long behavior persists in the absence of reinforcement (Figure 5.6).
One of Skinner’s important early conclusions was that even a very small reward will lead to effective learning and will maintain behavior over a long period. You don’t have to feed a dog an entire steak to teach it a sequence of simple tricks; a tiny morsel will do just as well. Besides, it’s clear that too much reward (satiation) may lead to a cessation of behavior. After several steaks, the dog might well say, “Enough, thank you, I’m—belch—going to curl up and sleep now.”
How reinforcement is administered is referred to as the schedule of reinforcement. Schedules always involve either continuous reinforcement, where every correct response is reinforced, or intermittent reinforcement (also called partial reinforcement), where only some correct responses are reinforced—or some combination of the two.
Intermittent schedules of reinforcement might be based on a proportion of responses (a ratio schedule), or on the passage of time (an interval schedule). For example, a ratio schedule might reinforce one out of five correct responses; an interval schedule might reinforce one correct response for every 15-second lapse. In either case, there are two more options: Reinforcement can be given in a predetermined fashion (fixed schedule) or in a more haphazard manner (random or variable schedule). And, of course, different schedules might be used at the same time in what are termed concurrent schedules.
There is, also, one additional choice: a superstitious schedule. A superstitious schedule provides regular reinforcement no matter what the learner is doing. It’s a fixed-interval schedule without the requirement that there be a correct response before reinforcement occurs. Skinner (1948) once left six pigeons overnight on a superstitious schedule (they received reinforcement at regular intervals no matter what they did). He found that by morning one bird had learned to turn clockwise just before each reinforcement, another pointed its head toward the corner, and several had learned to sway back and forth.
Skinner suggests that we too learn superstitious behaviors as a result of reinforcement that occurs independently of what we do. For example, some of us are very careful to always put on our red and yellow underwear whenever the home team plays. After all, they won that one time we wore those things. And they lost that time we forgot. Figure 5.9 summarizes Skinner’s schedules of reinforcement.
Figure 5.9
Schedules of reinforcement. Each type of schedule tends to generate a predictable pattern of responding.
Effects of Various Schedules
One of the things Skinner was interested in discovering was the relationship between various schedules of reinforcement and rate of learning, extinction rate, and response rate. Some of these results have important implications for teaching.
In the early stages of learning, it appears that continuous reinforcement leads to the highest rate of learning. When learning simple responses such as pressing a lever, the rat might become confused and would almost certainly learn much more slowly if only some of its initial correct responses were reinforced.
Interestingly, although continuous reinforcement often leads to more rapid learning, the extinction rate for behavior that has been continuously reinforced is considerably faster than for behavior that has been reinforced intermittently.
Among animal subjects, rate of responding is clearly a function of the schedule used. Pigeons and rats, for example, often behave as though they had developed expectations about reward. A pigeon that has been taught to peck a disk and is reinforced for the first peck after a lapse of 15 seconds (fixed interval) often stops pecking immediately after being reinforced and starts again just before the end of the 15-second interval. If, on the other hand, the pigeon is reinforced on a random ratio basis, its response rate will be uniformly high and constant, often as high as 2,000 or more pecks per hour. (See Figure 5.10.)
Figure 5.10
Idealized graphs showing pigeon pecking with two reinforcement schedules.
The Effects of Schedules on Humans
So! One can reinforce the behavior of rats and pigeons in a variety of clever ways and note a number of consistent effects this will have on their ridiculously simple behaviors. From this, many graduate dissertations and yards of published research can be derived for the erudition of the scholars and the amazement of the people. But what of human beings? How are they affected by schedules of reinforcement?
The simple answer is, in much the same way as experimental animals. Kollins, Newland, and Critchfield (1997) reviewed 25 studies that looked at this question. They concluded that humans seem to respond to schedules of reinforcement much as animals do. In the early stages of learning, we perform better under continuous schedules, but our responses are more durable and more predictable if we are later reinforced intermittently. That the attention-seeking behaviors of young children are so highly persistent may well be precisely because these behaviors are often reinforced intermittently.
Concurrent Schedules of Reinforcement
Clearly, however, human behavior is seldom as simple as might be the bar-pressing behavior of a rat or the key pecking of a pigeon. Neither the rat nor the pigeon has a lot of choices in its highly controlled environment: to press or not to press; to peck or not to peck. . . . But you, on the other hand, might have a near-overwhelming array of choices: To go to a movie or not to go; to study or not to study; to call this friend or that friend or the other friend; to text-message a parent; to update Facebook; to twitter your current thoughts for the amazement of your friends; and on and on. To each of these choices is linked the possibility of reinforcement. And each might be associated with very different schedules of reinforcement—a situation that defines concurrent schedules of reinforcement.
In studies of concurrent schedules, the organism can choose among two or more different behaviors, each of which is linked to a different schedule of reinforcement. For example, a pigeon might be placed in a situation where pecking disk A is linked to a variable ratio schedule and pecking disk B is linked to a variable interval schedule. Studies of pigeons under these circumstances indicate that they typically select which disk to peck and adjust their rate of pecking in clearly predictable ways that tend to maximize reward. A pigeon is not totally stupid!
Not surprisingly, studies with human subjects lead to much the same conclusion: Our behaviors in experiments where responses are tied to different schedules of reinforcement tend to be directed toward maximizing reinforcement (Silberberg et al., 2008).
Shaping Through Operant Conditioning
“We cannot teach cows to retrieve a stick,” Guthrie informs us. Fetching sticks is simply not something that is of any interest to this cow. The point is that the things we teach our children should be things that they both can and want to do.
It is relatively simple to train a rat to press a lever, a pigeon to peck a disk, or a 2-year-old to say “Wazoo.” Why? Because these are some of the things that rats, pigeons, and children do. But as Guthrie (1935) observes, “We cannot teach cows to retrieve a stick because this is one of the things that cows do not do” (1935, p. 45).
But maybe Guthrie is wrong: It just might be possible to train a cow to retrieve a stick. The psychologist charged with that task could stand there, leaning on the fence, day after day, watching for the behavior in question to appear. And when the cow finally decided in her cowlike way to pick up the stick, it would be a simple matter to reinforce her—say, with a nice new bale of timothy hay—thus, increasing the probability that the behavior would occur again. Unfortunately, both the psychologist and the cow would likely die of old age before the desired operant appeared.
Shaping is a much better way of teaching animals complex behaviors. It involves reinforcing the animal for every response that brings it slightly closer to the desired behavior. For example, to teach the cow to pick up a stick, the experimenter might initially reinforce the cow every time it turned toward the stick. Later, once the cow had learned to turn toward the stick, it would no longer be reinforced until it moved slightly closer to it. And if the reinforcements were accompanied by a distinctive stimulus such as the sound of a cowbell (a discriminated stimulus), eventually the cow might walk directly to the stick every time it heard the bell. And, following the systematic reinforcement of behaviors successively closer to the desired operant, in the end the cow might have learned to pick up and retrieve the stick, placing it gently in the psychologist’s hand, which would surely have amazed and confounded my grandmother!
Generalization and Discrimination
It isn’t possible for schools and teachers to give students experience with all situations in which a specific learned behavior will or will not be appropriate. Yet one of the most important tasks of schools is to prepare learners to respond appropriately in new situations. And reassuringly often, children do respond appropriately when faced with completely new situations. They learn to discriminate between situations where a particular behavior is appropriate and others where it isn’t—termed discrimination learning. And they learn when to apply a behavior in different situations where appropriate—termed generalization.
As an example, many children learn very early in life that their mother will pay attention if they cry. And they soon learn to generalize this behavior from specific situations where they have obtained their mother’s attention to new situations where they desire her attention. And often, a wise mother can bring about discrimination learning simply by not paying attention to her child in those situations in which she doesn’t want to be disturbed—like when she’s on the phone.
Instructional Implications of Skinner’s Operant Conditioning
The principles of operant learning are enormously relevant for teaching. A classroom is in many ways like a gigantic Skinner box. Like a Skinner box, it is engineered so that certain responses are more probable than others. For example, it is easier to sit at a desk than to lie in one,
