Measuring the Reinforcing Strength of Abused Drugs

Jack Bergman and Carol A. Paronis
McLean Hospital/Harvard Medical School, 115 Mill St., Belmont, MA 02478

Abstract

Animal models for human diseases are highly valued for their utility in developing new therapies. Animals have long provided suitable platforms for the development of innovative surgical procedures and for the study of disease states that are relatively easy to produce in otherwise healthy animals, such as diabetes or hypertension. Increasingly, new strains of animals susceptible to common human illnesses are being introduced into medical research, promising new inroads into the treatment of a variety of organic disorders. Despite these advances in model development, psychiatric disorders, by and large, remain among the hardest to induce experimentally, and the search for reasonable animal procedures to study diseases of the mind is an ongoing challenge for experimental biologists. An exception to this limitation, however, comes in the study of drug abuse. Major developments in this area of research over the last several decades have steadily advanced our ability to identify pharmacological, genetic, and environmental determinants that contribute to the development of drug dependence and addictive behavior.

Introduction

From a pharmacological view, drug abuse–related disorders represent perhaps unique medical concerns in that the drug is viewed not as a therapeutic agent but, instead, as the root cause of the disorder. Empirically, it is relatively easy to study some aspects of drug dependence, most notably biochemical or physiological changes that develop consequent to repeated or continuous exposure to drugs. It is a more difficult matter to study behavioral aspects of drug dependence, which may not be easily isolated or concretized. For example, it is well accepted that reinforcing effects of drugs contribute greatly to their abuse; that is, drug intake can produce biological changes that promote further intake, leading eventually to abuse. However, this outcome is not invariable. Some people who are exposed to nicotine through cigarettes become heavy smokers, others smoke only situationally, and still others never take up the habit. The determinants of which response to the “reinforcing” effects of nicotine or of other drugs will prevail remain complex and poorly understood.

Further complicating matters, it is not enough to simply consider the reinforcing effects of a drug itself. Stimuli associated with drug use can become conditioned to the effects of the drug, powerfully enough at times to produce the same biological responses as the drug itself. In a compelling demonstration of this phenomenon, for example, O’Brien showed several decades ago that sham injections could provoke opiate-like effects in heroin addicts (1). More recently, it has become commonly accepted that relapse to drug-taking behavior in abstinent individuals can be spurred by stimuli previously associated with the drug, such as friends or the sight of drugs or drug paraphernalia (2). Observations such as these indicate that, at the least, models with which to study drug abuse must address both the pharmacology of the addictive drug and contextual or environmental elements that dictate addictive behavior.

The Development of Self-Administration Procedures in Laboratory Animals

The emergence of behavioral pharmacology in the last half of the twentieth century offered the opportunity to formally analyze the effects of drugs in different behavioral contexts. Critical to our current understanding of drug dependence and addictive behavior has been the use of operant procedures to clearly demonstrate that drugs of abuse were reinforcing, i.e., that they could maintain “self-administration” in laboratory animals (see Box 1). Prior to the use of self-administration procedures to demonstrate reinforcing effects of drugs in laboratory animals, the property of addiction was thought to be a uniquely human dilemma. It was argued to reside primarily in the individual, beginning as a manifestation of “moral weakness” that led to and was ultimately driven by physical dependence [see (3)]. Indeed, the very first drug self-administration studies in animals also were based upon such conceptions and used animals that were made physically dependent on morphine (4–6). In short order, however, Deneau demonstrated convincingly that underlying drug dependence was not necessary for intravenously administered drugs to serve as reinforcers. In a landmark study, he showed that drugs from a variety of pharmacological classes could engender self-administration behavior in non-dependent—and, presumably, “morally neutral”—rhesus monkeys (7). These data were among the first to advance the proposition that reinforcing effects were a pharmacological property of some behaviorally active drugs. This idea, revolutionary at the time, fit well with emerging ideas in biological psychiatry and now is a commonly accepted tenet of drug abuse research.

Box 1.

Vocabulary

It has been said that the study of behavior suffers from our familiarity with ourselves and from problems of vocabulary (63, 64). Most English speakers will recognize the words “reinforcer” and “response,” yet they invariably imbue these words with different shades of meaning based on their own experience. However, an analytic approach to behavior requires that words retain the same definition across different users and different experimental situations. To aid in the reading of this paper, we provide a glossary, drawn largely from Catania (64), of some of the terms we use, and the meanings attached to them when used to describe operant behavior.

Choice – the allocation of responding to one of two or more alternative (and incompatible) schedules of reinforcement.

Concurrent schedules – an arrangement under which two or more schedules of reinforcement are simultaneously in effect. Concurrent schedules of reinforcement may or may not be formally identical.

Conditioning – process by which the function of a stimulus event is altered through repeated pairing with other stimuli. For example, a previously neutral tone that is repeatedly paired with food delivery can become a conditioned stimulus that provokes the same response as food delivery itself.

Preference – the probability of a response under one of two or more alternative schedules of reinforcement. Preference is derived from the distribution of responding under the different schedules and may provide one index of the relative effectiveness of different reinforcers.

Progressive ratio – a schedule under which the delivery of reinforcement increases the ratio size for the next reinforcer according to a pre-set progression. The break point refers to the last completed ratio size under a progressive ratio schedule; it marks the transition from responding to non-responding and is used as an economic index of reinforcing value.

Punisher – a stimulus event that decreases the probability of responding.

Reinforcer – a stimulus event that increases the probability of responding.

Note that the definition of punisher or reinforcer is dictated by changes in the probability of behavior rather than by inherent characteristics of the stimulus event.

Response – a class of behavior the occurrence of which can be objectively defined and recorded (e.g., the depression of a lever with a given force). In operant conditioning, the frequency of a response, or responding, can be controlled by the schedule of reinforcement.

Response rate – the frequency of responding, calculated by dividing the number of responses by the time during which responses are recorded.

Response cost – the number of responses required for delivery of a reinforcer.

Schedule of reinforcement – the arrangement of contingencies between responding and the delivery of reinforcement. Commonly used schedules arrange for the delivery of a reinforcing stimulus event after the last of a specified number of responses (ratio schedule) or after the first response emitted once a specified period of time has elapsed (interval schedule); see the sketch following this glossary.

Schedule control – the ability of different schedule contingencies to direct unique rates and patterns of operant responding.

Second-order schedule – a contingency arrangement under which a series of responses under a schedule of conditioned reinforcement is treated as a unit response under a second schedule that is simultaneously in effect. For example, under a second-order fixed-ratio 10 schedule of food delivery with fixed-ratio 5 units, every fifth response produces a conditioned reinforcer (e.g., a visual stimulus), and the completion of the tenth fixed-ratio 5 unit produces the conditioned reinforcer accompanied by food delivery. Second-order schedules can be used to generate high rates of responding over long periods of time.

Self-administration – a contingency arrangement under which responding is controlled by the delivery of a unit dose of drug.

Stimulus event – any occurrence that can be measured on one or more dimensions (e.g., a tone, a light); often loosely referred to as a “stimulus.” Stimulus events can serve different behavioral functions (e.g., see reinforcer, punisher).

Timeout – a period during which scheduled stimuli are extinguished and responding has no scheduled consequences.

Unit dose – in self-administration studies, the dose of drug available for delivery as a reinforcer.
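To make the schedule definitions above concrete, the sketch below (our illustration in Python; the class names are hypothetical and not drawn from any standard operant-control software) shows the minimal contingency logic that decides whether a given response produces a reinforcer under a ratio or an interval schedule.

```python
# Minimal, illustrative schedule contingencies (hypothetical class names).

class FixedRatio:
    """Ratio schedule: reinforce the last of `n` responses."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def record_response(self, t):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # this response produces the reinforcer
        return False


class FixedInterval:
    """Interval schedule: reinforce the first response emitted after
    `t_s` seconds have elapsed since the last reinforcer."""
    def __init__(self, t_s):
        self.t_s = t_s
        self.last_reinforcer = 0.0

    def record_response(self, t):
        if t - self.last_reinforcer >= self.t_s:
            self.last_reinforcer = t
            return True
        return False


# Example: under FR 30, only every 30th response produces a reinforcer.
fr30 = FixedRatio(30)
print(sum(fr30.record_response(t) for t in range(100)))  # -> 3
```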

Early self-administration studies set the stage for further research on the reinforcing effects of drugs, either to elaborate their determinants or, in a more practical vein, to assess their abuse liability. Operant procedures were especially suitable for quantifying rates and patterns of behavior, and numerous studies documented the responsivity of drug-maintained behavior to parametric manipulation of schedule contingencies [see Box 1; (8)]. From a pharmacological perspective, the reinforcing effects of drugs were shown to be receptor-mediated phenomena: they were dose-related and could be antagonized by selective receptor blockers. The reinforcing effects of a drug could thus be graphed as a dose-response function described in terms of its slope and position. Moreover, antagonism of those reinforcing effects, like antagonism of other drug effects, could be expressed quantitatively in terms of the horizontal or vertical displacement of the dose-response function.

From a behavioral perspective, the reinforcing effects of drugs were shown to be formally similar to those of other events that could maintain behavior, such as food or water delivery. Interestingly, rates and patterns of self-administration behavior appeared to be more closely governed by the schedule of reinforcement or other contextual contingencies than by the type of reinforcer. One value of such findings was that they suggested that drug self-administration behavior could be analyzed in much the same way as other types of reinforced behavior. These advances in analyzing the reinforcing effects of drugs in laboratory animals have been invaluable in further clarifying controlling factors in the behavior of drug-addicted individuals and have led to the contemporary view that drugs and other reinforcers act partly through common brain mechanisms involved in motivated behavior.

Reinforcing Strength

Notwithstanding these advances in our understanding of reinforcing effects of drugs, the difficulty in measuring a drug’s reinforcing strength per se is not always well appreciated. Reinforcing strength has been discussed previously in terms of effectiveness or “efficacy” (although it is not formally related to pharmacological “efficacy”) and refers to the likelihood that a drug will serve as a reinforcer under varying conditions (9). Thus, a weak reinforcer might be self-administered only when response requirements are low or there are no other behavioral options, whereas a strong reinforcer might be self-administered even under more stringent response requirements or in the presence of other usually attractive reinforcers. It is easy to see how the reinforcing strength of a drug might be directly related to its abuse liability; however, measuring reinforcing strength turns out to be no simple matter.

Most laboratory procedures for studying drug self-administration involve subjects, usually rats or monkeys, that respond (e.g., press a lever) under a simple operant schedule of reinforcement. The most commonly used schedule, a “fixed-ratio” (FR) schedule, specifies that the completion of a fixed number of responses triggers the automated delivery of a fixed dose of drug. Under such simple schedules, response rate or the number of injections per session is the dependent variable that serves as the obvious index of reinforcement. Based on the simple assumption that the reinforcing effects of drugs, like other pharmacological endpoints, are directly related to dose, one might presume that response rate or intake will increase monotonically as a function of dose; however, this turns out not to be the case. Instead, the behaviorally disruptive effects of a self-administered drug, especially at high unit doses, interfere with behavior under simple schedules of reinforcement, resulting in decreases in measures of either response rate or intake. Thus, the reinforcing effects of drugs under simple schedules (e.g., fixed-ratio) most often are characterized by an inverted U–shaped dose-response function. This type of function relating dose and effect is not often seen in pharmacology but accurately reflects the commingling of reinforcing and other behaviorally disruptive effects of drugs in the whole organism (see Figure 1). The quantitatively similar effects of low and high doses of reinforcing drugs under these procedures are especially difficult to reconcile with the notion that reinforcing strength can be directly related to response rate or intake.
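The shape of this function can be illustrated with a toy calculation (ours, purely illustrative and not a model drawn from the literature): if the tendency to respond grows with unit dose while rate-disrupting effects grow disproportionately at high doses, the number of injections earned per session first rises and then falls.

```python
# Toy illustration only (not a published model): injections earned per
# session under a simple FR schedule when reinforcing and rate-disrupting
# effects of the unit dose are commingled.

def injections_per_session(unit_dose, fr=30, session_min=60.0):
    # Hypothetical components chosen only to illustrate the shape:
    reinforcing = unit_dose / (unit_dose + 0.03)         # saturates with dose
    rate_sparing = 1.0 / (1.0 + (unit_dose / 0.3) ** 2)  # disruption cuts this at high doses
    responses_per_min = 60.0 * reinforcing * rate_sparing
    return int(session_min * responses_per_min / fr)

for dose in (0.003, 0.01, 0.03, 0.1, 0.3, 1.0):          # mg/kg per injection
    print(f"{dose:5.3f} mg/kg -> {injections_per_session(dose)} injections")
# Intake rises and then falls with dose (an inverted U), even though the
# "reinforcing" term itself increases monotonically.
```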

Figure 1. Bimodality of function in self-administration studies. The dose per injection is shown along the abscissa; the number of injections self-administered (reinforcers) is shown along the ordinate. Note that low and high unit doses of cocaine and heroin result in few reinforcer deliveries, whereas intermediate doses of either drug result in the greatest self-administration. As discussed in the text, this type of dose-response relationship is not easily interpreted in terms of reinforcing strength.

Behavioral pharmacologists have employed different strategies to address this problem. For example, one approach has been to space drug injections to minimize their disruptive effects. This can be done by scheduling long timeout periods after injections to allow behaviorally disruptive effects of the drug to dissipate before the next opportunity to self-administer the drug. This modification of simple fixed-ratio schedule procedures has been used successfully to evaluate reinforcing effects of a variety of drugs including stimulants, opioids, barbiturates, and benzodiazepines (10–12). Another way to space drug injections is to use a more complex behavioral procedure called a second-order schedule. Responding under a second-order schedule depends upon association of the drug injection with a non-drug stimulus, usually colored lights. The non-drug stimulus takes on the characteristics of a reinforcer and becomes a “conditioned reinforcer.” After conditioning, high rates of behavior can be maintained by the conditioned reinforcer throughout the session, with only infrequent pairings with drug injections, sometimes as few as one or two pairings per session. For example, nicotine historically has been a difficult drug to evaluate under simple schedules of reinforcement but, under second-order schedules, it is a highly effective reinforcer and can maintain rates and patterns of behavior comparable to those maintained by cocaine (13).

The approach of spacing drug injections to minimize disruptive effects has had obvious success. Under procedures that employ long timeouts or second-order schedules, the function relating behavior to unit dose is broadened, and the downward segment of the inverted U-shaped function may become less evident. However, these approaches also have limitations for evaluating the reinforcing strength of a drug. When long timeouts are used to space self-administered injections, the amount of behavior generated by the reinforcing effects of a drug is arbitrarily restricted, reducing the resolution of the procedure for evaluating reinforcing strength. For procedures that employ second-order schedules, rates and patterns of responding are largely determined by the reinforcing effects of the non-drug stimuli; it is therefore difficult to separate the reinforcing strength of the conditioned reinforcer from that of the drug injection per se.

Another approach to evaluating reinforcing strength borrows from early work by Hodos in which the fixed-ratio requirement, but not the magnitude of the reinforcer, is progressively increased throughout the behavioral session (14). For a subject that presses a lever for food under a fixed-ratio schedule, for example, delivery of the first food pellet may require a single response, whereas delivery of the tenth pellet later in the session may require 100 responses. The dependent variable under this procedure is the break point, that is, the number or size of the final completed response requirement. There are many variants of this type of procedure in self-administration research, but an economic presumption underlies all of them: behavioral output under a progressive ratio schedule is directly related to the value, or strength, of the reinforcer. This approach has several attractive features. First, the relationship between dose of self-administered drug and break point value can be monotonic, usually over a wide range of doses, which makes evaluations of antagonism or tolerance more pharmacologically straightforward than in more conventional self-administration procedures. Second, progressive ratio procedures can be used across pharmacological classes, and orderly data can be obtained for drugs within a given class. Third, the break point value itself offers an intuitive approach to quantifying reinforcing strength. Behavioral economics, a vigorous area of investigation into the reinforcing strength of self-administered drugs, has used a similar conceptualization of reinforcement processes to develop sophisticated mathematical analyses of the reinforcing effects of self-administered drugs (15–17). Notwithstanding these attractions, a number of procedural elements (e.g., starting ratio values, size and frequency of ratio increments, inherent relationships between response latency and ratio size) influence break point values and can make it surprisingly difficult to compare results across studies that use progressive-ratio procedures. Also, as with simple fixed-ratio schedules, the behaviorally disruptive effects of the self-administered drug can affect response rates, which in turn can confound break point determinations. One indication of the difficulty with progressive ratio procedures is that opioid drugs such as heroin rarely, if ever, maintain break point values comparable to those obtained with psychomotor stimulants such as cocaine (18, 65).
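As a minimal sketch (ours; the geometric progression and the stopping rule are arbitrary illustrative assumptions, not the parameters used in any cited study), the following code shows how a progressive-ratio session yields a break point, and why a reinforcer that sustains more responding per delivery reaches a higher break point.

```python
# Illustrative progressive-ratio session: the ratio grows after each
# reinforcer, and the break point is the last completed ratio before
# responding ceases. Progression and stopping rule are hypothetical.

def progressive_ratios(start=1, step_factor=1.35):
    """Yield an increasing sequence of response requirements."""
    ratio = float(start)
    while True:
        yield max(1, round(ratio))
        ratio *= step_factor

def break_point(max_responses_tolerated):
    """Return the last completed ratio under a simple stopping rule:
    the subject quits once a single ratio exceeds the responding it
    will emit for one reinforcer (a stand-in for reinforcing strength)."""
    last_completed = 0
    for ratio in progressive_ratios():
        if ratio > max_responses_tolerated:
            return last_completed
        last_completed = ratio

# Larger unit doses are assumed (for illustration only) to sustain more
# responding per reinforcer, so they reach higher break points.
for dose, tolerated in [(0.01, 40), (0.03, 120), (0.1, 400)]:
    print(f"{dose} mg/kg -> break point {break_point(tolerated)}")
```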

In each of the procedures described above, subjects are trained to respond on a single lever or other manipulandum for response-contingent drug delivery, and efforts to quantify reinforcing strength necessarily must be evaluated in terms of response rate or intake (Box 2). This approach emphasizes the pharmacological contribution to reinforcing strength; however, from the very first studies by Spragg, it was clear that the reinforcing effects of a drug might be best evaluated when other options for reinforcement also are available (5). In those experiments, Spragg made chimpanzees morphine-dependent and then presented them with an opportunity to select one of two boxes, one containing a banana, the other a morphine-loaded syringe. These studies are frequently identified as a milestone, clearly demonstrating that drug injections could be reinforcing in animals. What often is overlooked, however, is that the chimpanzees did not always select the syringe-containing box. Spragg found that box selection was predicted by the context in which the choice was offered: when the subjects were morphine-abstinent they selected the syringe, but when they were food-deprived they selected the banana. Thus, the very first study of drug self-administration in animals observed that whether or not a drug would serve as a reinforcer—in essence, its reinforcing strength—was dynamic.

From the viewpoint of drug abuse research, the dynamic nature of the reinforcing strength of a drug is perhaps not a surprising finding. As discussed above for nicotine, it is known that the abuse of a drug is not an invariant outcome but, rather, is determined by a complex set of pharmacological, environmental, and individual factors. Nonetheless, the work by Spragg, along with similar findings in subsequent research, poses an interesting challenge for investigations in experimental animals, namely, how is the reinforcing strength of a drug best measured? One answer to this question is that the reinforcing effects of a drug should be studied under a wide range of experimental contingencies: drugs that are self-administered under a wide variety of contingencies presumably are more robust reinforcers than those that are self-administered only under limited schedules and conditions (9). However, such comparative pharmacological studies complicate data quantification and place impractical burdens on the time of most investigators. An alternative approach to assessing reinforcing strength is, as Spragg did, to gauge the ability of an event to act as a reinforcer when it is offered concurrently with another event, that is, to offer the subject the choice between two options.

Box 2.

The Mechanics of Choice Experiments

The most important dependent variable obtained using concurrent schedules is the distribution of behavior across the two options, as this reflects preference for one reinforcer over another. It is important to note, however, that other markers of self-administration behavior are also obtained. To the right are data obtained using a two-lever choice procedure. The figure is a cumulative record of i.v. cocaine self-administration under concurrent FR 30 schedules of food delivery and i.v. injections (see main text for details). Time is shown along the abscissa. Each response on a lever is recorded as a step of the pen upward on the ordinate; reinforcer delivery is shown as a diagonal downward slash of the pen. The pen resets to zero after each of three 30-min components. Green ink shows responding on the injection-associated lever; red ink shows responding on the food-associated lever. The doses available for injection and the data values obtained from the session – response distribution, number of reinforcers earned, and response rate – are given in the table below the record. Note that responding is allocated to the different levers depending on the injection available during the session component: when saline injections are available (first component), the subject samples one injection (green) and then allocates all responding to the food-associated lever (red); when a low unit dose of cocaine is available (second component), the subject responds for three injections (green) and then re-allocates responding to the food-associated lever (red). When a higher unit dose of cocaine is available (third component), 92% of all responses occur on the injection-associated lever (green) and no food pellets are earned. In contrast to the data presented in Figure 3, the response rate data here reflect responding on both levers. Thus, response rates are highest during the first component, when saline is available for i.v. self-administration and responding occurs predominantly on the food-associated lever, and become lower after injections of cocaine are self-administered.
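For readers unfamiliar with how such session summaries are tallied, the sketch below (ours; the event-log format and field names are hypothetical) computes the three markers described above (response distribution, reinforcers earned, and overall response rate) from a simple list of time-stamped lever presses.

```python
# Illustrative computation of choice-session summaries from an event log.
# The log format and field names are hypothetical, not from the article.
from dataclasses import dataclass

@dataclass
class Event:
    time_s: float      # time of the response within the component
    lever: str         # "drug" or "food"
    reinforced: bool   # True if this response completed the FR and
                       # produced an injection or a food pellet

def summarize(events, component_duration_s=1800.0):
    drug = [e for e in events if e.lever == "drug"]
    food = [e for e in events if e.lever == "food"]
    total = len(events)
    return {
        "% responses on drug lever": 100.0 * len(drug) / total if total else 0.0,
        "injections earned": sum(e.reinforced for e in drug),
        "pellets earned": sum(e.reinforced for e in food),
        # Response rate pools both levers, as in the record described above.
        "responses per second": total / component_duration_s,
    }

# Tiny example: 30 food-lever responses (one pellet) and 60 drug-lever
# responses (two injections) in a 30-min component.
log = [Event(i * 10.0, "food", i == 29) for i in range(30)] + \
      [Event(300 + i * 5.0, "drug", (i + 1) % 30 == 0) for i in range(60)]
print(summarize(log))
```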



Choice Procedures

Findley (19) observed that when a food reinforcer was available under two concurrent schedules of reinforcement, each signaled by distinctive visual stimuli, pigeons preferred the “richer” schedule that allowed more frequent access to food. This preference was made even more explicit by the development of a switching procedure under which subjects could select the schedule of reinforcement that would be in effect during the session. Findley and others showed that, when given a choice, animals consistently chose to respond under the schedule that offered the richer opportunity for reinforcement, regardless of whether this richness was determined by the magnitude or quality of the reinforcer, differences in the frequency of reinforcement, or differences in response cost (20, 21). These early studies firmly established that behavioral allocation was an expression of preference and could be presented as a function of reinforcer magnitude.

Following in the footsteps of Spragg, but with explicit operational definitions for choice, preference, and reinforcer magnitude, several investigators showed that monkeys would actively choose to receive injections of drug over injections of saline (22–24). The procedures used in the early choice studies of drug intake were technically dissimilar and were designed to address different questions. Findley was interested simply in determining whether animals expressed a preference for one drug over another (e.g., secobarbital vs chlordiazepoxide) (23). Other investigators deliberately sought to mitigate the influence of behaviorally disruptive effects of drugs on estimates of reinforcing strength when comparing different doses of the same drug. Iglauer and Woods (24), for example, compared the reinforcing effects of different doses of cocaine that were concurrently available throughout the session, using concurrent variable-interval schedules to obtain preference measures that were relatively independent of the frequency of responding (see Box 1). In the exacting studies of Johanson and Schuster (22, 25), the session began with a sampling period during which the subject was required to self-administer both available i.v. solutions, each associated with fixed-ratio responding on a different lever. The sampling period was followed by eighteen to twenty-five discrete trials during which the animals could complete the fixed-ratio requirement on either lever. Successive trials were separated by fifteen minutes, which minimized the behaviorally disruptive effects of drug doses and permitted the animals to complete all trials. Johanson and Schuster successfully compared cocaine and methylphenidate, analyzing the effects of either drug against saline, of different doses of each drug, and, finally, of selected doses of cocaine vs methylphenidate. Despite differences in the schedules used in their studies, the two laboratories found that higher doses of drug were preferred to lower doses (24, 25). Johanson and Schuster showed, additionally, that the reinforcing strength of cocaine relative to methylphenidate also was dose-dependent; at appropriate doses, either cocaine or methylphenidate could effectively control the allocation of behavior. These results substantively furthered the idea that the reinforcing effects of drugs vary with dose and that animals are sensitive to differences in the “reinforcing strength” of self-administered drugs.

The self-administration research described in the preceding paragraph exploited the lawfulness of choice behavior and demonstrated its utility for relating the reinforcing strength of a drug to behavioral and pharmacological parameters. Most notably, the function that related choice (i.e., the distribution of behavior) to dose was monotonic, even in the presence of otherwise behaviorally disruptive effects, and thus clearly showed that higher doses of drugs serve as stronger reinforcers than lower doses. Since the seminal parametric studies of cocaine and methylphenidate by Johanson and colleagues, other investigators have used choice procedures to further analyze the reinforcing strength of self-administered drugs, such as cocaine, opioids, sedative-hypnotics, and dissociative anesthetics (10, 26–30). Generally, these studies conclude that the reinforcing strength of self-administered drugs is a dynamic function of multiple pharmacological and environmental variables.

The advantages of using choice procedures to gauge reinforcing strength in animal studies seem obvious. As mentioned above, the impact of the behaviorally disruptive effects of a self-administered drug is greatly reduced, and monotonic functions relating dose of drug to the distribution of behavior can be easily obtained. Moreover, the more commonly used markers of drug self-administration remain accessible, permitting evaluation of intake and changes in rates of behavior as a function of drug dose. Choice methodology also obviates the need for extinction in order to verify the reinforcing effects of a self-administered drug (see Box 2). Extinction in laboratory animals typically is characterized by high rates of unreinforced behavior that eventually subside, and it introduces a behavioral process that is not widely studied in drug self-administration research. Finally, previous studies of food-maintained behavior have shown that overall rates of responding, unlike choice, are relatively insensitive to changes in reinforcement magnitude. For measurement of reinforcing strength, it is thus advantageous to compare the allocation, rather than the frequency, of behavior (Figure 2) (21).

Figure 2. Reinforcing effects of food as a function of the duration of food delivery on either of two available response keys. The abscissa shows the duration of food delivery on one of the response keys (variable key) when the duration is held constant on the second key. The upper ordinate shows the percentage of responding on the variable key (orange circles) and the constant key (green circles). The lower ordinate shows the response rate on both keys. Note that pigeons chose, or preferred, longer access to grain and that the distribution of behavior is not well tracked by changes in response rate. [Adapted from (21).]

The merits of choice procedures can perhaps be best understood by direct comparison with single-lever procedures. Single-lever self-administration procedures, after all, offer experimental animals a simple Hobson’s choice: allocate responses to the injection lever or not. When vehicle is in the syringe, the well-trained subject will not respond (extinction). Drug abusers, however, allocate the same money or other currency to drugs that, alternatively, could be used for food or other goods. Moreover, true extinction of drug-seeking behavior rarely, if ever, occurs clinically. Rather, the cessation of drug use generally is defined by abstinence, either forced or unforced. Choice procedures, by measuring reinforcing strength relative to behavior allocated toward alternative reinforcement, may thus more directly mirror clinical realities. Indeed, investigators who conduct self-administration research in human drug abusers commonly use procedures offering subjects a choice between earning vouchers or money and gaining access to drugs (31–34). A growing literature shows that strategies to direct choice in the context of drug-taking behavior (e.g., contingency management) can effectively decrease the abuse of some drugs (35, 36).

Given the benefits of choice procedures for analyzing reinforcing strength, one may question why studies employing choice procedures in laboratory animals are not more prevalent. One practical explanation is that training animals to perform complex tasks can take a long time. This can be particularly problematic in i.v. self-administration studies because catheters remain patent for a limited time, often less than six months in rats. For this reason, most choice studies in laboratory animals have been conducted in monkeys [but see (37, 38)]. A more theoretical consideration is that, with the exception of dose comparisons, the experimental provision of the alternative reinforcer is arbitrary. Water, used in oral self-administration studies, has the advantage of being the null vehicle condition; however, the reinforcing value of water in non-deprived animals can be difficult to gauge. Food delivery, often used as an alternative reinforcer in i.v. self-administration studies, provides no common metric by which to compare the reinforcing strength of food and drugs. One way around this difficulty is to make comparisons across doses or among drugs with reference to a single alternative non-drug reinforcer when both drug and non-drug options are available under concurrent schedules of reinforcement. In any case, it is important to realize that such comparisons provide only a measure of relative, rather than absolute, reinforcing strength. For example, in recent self-administration experiments, Negus (39) studied the effects of pharmacological treatments in monkeys in which the response requirement for food delivery under baseline conditions was 10 times greater than that for i.v. cocaine (FR 100 vs FR 10). Results indicated that equating the response requirements (FR 10 vs FR 10 or FR 100 vs FR 100) shifted the dose-response function for cocaine choice to the right. Relative measures of reinforcer strength thus challenge the researcher to establish which of the possible sets of concurrent schedules is the most appropriate condition for pharmacological experiments. To avoid laborious comparisons, it seems reasonable to formally match as many procedural variables as possible under baseline conditions (24, 40). The use of identical schedules with comparable parameters of reinforcement may minimize methodological bias and promote meaningful analysis of reinforcer strength.

The dynamic aspects of the reinforcing strength of a drug under concurrent schedules of drug and non-drug reinforcement are well illustrated in studies of cocaine. The report that has perhaps attracted the most notoriety is a choice study suggesting that the reinforcing strength of cocaine is so great that animals will not work for food when cocaine is also available under identical schedule conditions (41). This work was instrumental in raising public awareness of the dangers of cocaine addiction. Subsequent work, however, has analyzed more rigorously the reinforcing strength of cocaine when food delivery is concurrently available [for examples, see (42–44)]. These studies have demonstrated conclusively that preference for cocaine, like preference for other reinforcers, is sensitive to parametric manipulation: behavior will be re-allocated to the food option if the dose of cocaine is lowered, if the magnitude of food reinforcement is increased, or if the response cost for cocaine injections is raised. These findings, of course, do not diminish the gravity of cocaine addiction; rather, they point to the types of strategies that may be employed to reduce the relative reinforcing strength of a self-administered drug.

Application of Choice Procedures

Currently, there appears to be growing interest in using choice procedures in drug self-administration studies in experimental animals (29, 30, 42, 45–48). The economics of choice in the self-administration of abused drugs can reveal how the allocation of drug self-administration behavior changes in relation to dose or density of reinforcement or to contextual features of the experimental conditions (22, 27, 28, 42, 49, 50). In this work, it is interesting that drug injections, although formally similar to other reinforcing stimuli, exhibit characteristics that differentiate them from non-drug reinforcers. For example, Iglauer and Woods (49) were the first to note that animals, given the option of two doses of cocaine, tended to exhibit an “exclusive preference” for the higher of the two available doses, resulting in a steep dose-effect relationship. These results contrast, for instance, with the more graded effects that are generally obtained when the magnitude of food reinforcement is varied (Figure 3). It is unclear why this should be the case, but it is nonetheless intriguing to consider that the reinforcing strength of cocaine is a pharmacologically quantal measure.

Figure 3. Self-administration by monkeys trained to respond under two concurrent schedules of reinforcement. Monkeys were trained to respond on one lever for food pellets and on an alternate lever for i.v. injections under concurrent FR 30; FR 30 schedules of reinforcement. The abscissae show the dose per injection. The upper ordinate shows the percentage of responding on the injection-associated lever, whereas the lower ordinate shows the response rate on the injection-associated lever. The availability of i.v. saline (in lieu of drug) results in the allocation of responding to the food-associated lever; the availability of cocaine or heroin results in dose-related increases in the allocation of responding to the injection-associated lever. In contrast to the monotonic function relating dose to choice (upper panel), response rates first increase and then decrease as a function of dose (bottom panel). Note the similarity between the plots for response rate and for intake (Figure 1).

Choice methodology has also been used to characterize the effects of novel drugs in experimental animals, either to assess abuse liability or to assess potential utility in treating drug addiction. It is conceptually straightforward to directly evaluate the abuse liability of novel medications by assessing their relative reinforcing strength in relation to appropriate comparators. This approach is derived from earlier analyses in which the reinforcing strengths of abused drugs were directly compared (22, 51). More recently, Lile and colleagues compared the relative reinforcing efficacy of cocaine and PTT, a high-affinity blocker of dopamine transport, in a drug-vs-drug choice procedure. Their results indicate that the reinforcing strength of cocaine could be modified by the availability of the dopamine transport blocker, encouraging its further evaluation as a candidate medication (52). Other studies have compared the relative reinforcing strengths of different drugs offered concurrently with a common non-drug reinforcer such as food. Using this approach, Gasior et al. found that the reinforcing strength of atomoxetine, a novel treatment for ADHD, was negligible compared to that of cocaine (46). Similar data were obtained with desipramine, whereas more conventional therapies such as amphetamine or methylphenidate had cocaine-like relative reinforcing strength (Figure 4).

Figure 4. Evaluation of addiction liability for four drugs. Comparison of amphetamine, methylphenidate, atomoxetine, and desipramine in animals trained under concurrent FR 30; FR 30 schedules of food delivery and i.v. injections. The abscissa shows the dose per injection. The ordinate shows the distribution of responding on the injection-associated (left, red axis) and food-associated (right, black axis) levers; the dashed line represents 50% of responding allocated to each lever, suggesting equal reinforcing strength of the two available reinforcers. Note that the reinforcing strength of some doses of methylphenidate and amphetamine was equal to or greater than that of the alternatively available food pellets. In contrast, food was preferred to all doses of atomoxetine or desipramine. See text for details. [Adapted from (46).]

A conceptually more difficult area of research concerns the use of drug pretreatment to modify reinforcing strength. Presumably, an effective medication against drug abuse might lessen the reinforcing strength of abused drugs without greatly altering the reinforcing strength of the non-drug alternative reinforcer. With this in mind, choice procedures have been used to evaluate how treatment agents may modify the relative reinforcing strength of i.v. cocaine and other abused drugs (31). To date, no successful therapies have emerged from this approach using human subjects, and existing research in laboratory animals is sparse. Moreover, results from choice procedures are not always consistent with preclinical findings from single-option procedures. For example, dopamine receptor blockers such as haloperidol or chlorpromazine do not antagonize the reinforcing strength of cocaine in choice procedures (53), but they have been shown to reduce its reinforcing effects in single-option procedures (54, 55). Of interest, dopamine receptor blockers also have not been effective in studies in human cocaine abusers (56, 57), perhaps lending validity to the results obtained with choice procedures.

One might suppose that the validation of choice procedures for the evaluation of candidate medications for opioid abuse would benefit from the availability of known effective pharmacotherapies such as methadone; however, available data are sparse and somewhat inconsistent. In early studies, methadone was shown to decrease the choice of a fixed dose of heroin in opioid-dependent monkeys (58), paralleling the reduction of heroin use in methadone-maintained clients. However, using a somewhat different choice methodology, methadone was reported to be without effect (59). The reasons for these differing results are not immediately apparent, but they clearly indicate the need for additional studies of this type.

Recent thought regarding the management of drug addiction emphasizes the need for medications that may reduce relapse to drug-taking behavior. One approach toward this end has been to evaluate candidate medications in reinstatement procedures. In these procedures, subjects are first trained to self-administer an abused drug (e.g., cocaine) and, subsequently, self-administration behavior is extinguished. Operant behavior then can be reinstated by noncontingent delivery of the self-administered drug, and candidate medications can be assessed for their ability to reduce the reinstated behavior. These procedures have been widely used, even though some concerns have emerged regarding the interpretation of such extinction-based results (60). As discussed above, extinction is a rare event in human drug abuse, and it may be expedient to evaluate candidate medications using procedures that do not involve extinction. With this in mind, it is of interest that the substitution of vehicle for the available drug in choice procedures, rather than extinguishing behavior, usually results in high rates of responding for the alternative reinforcer. Based on this observation, we recently investigated the effects of noncontingent injections of drugs on the distribution of choice behavior when food and vehicle were concurrently available (61). Our results indicate that noncontingent administration of the self-administered drug, cocaine, led to the re-allocation of behavior toward self-administration. These effects were dose-dependent, time-dependent, and highly reproducible, even though responding resulted only in injections of vehicle. Similar effects were observed with pharmacologically related drugs but not with pharmacologically dissimilar drugs. Although we have not yet determined whether these effects can be reduced by candidate therapeutics for cocaine abuse, such results nonetheless encourage the idea that choice procedures may be useful for evaluating anti-relapse medications without the concerns that attach to behavioral extinction.

Conclusion

In many ways, studies that use the allocation of operant behavior to evaluate preference between reinforcers appear to be well suited for studying the reinforcing strength of drugs. In this regard, although the studies discussed above were conducted largely in experimental animals, choice studies also have served as the gold standard for self-administration procedures conducted in human subjects (31, 32, 53, 62). Unlike studies in human subjects involving choice between drugs and money, the alternative reinforcers in studies with laboratory animals usually are not fungible. This drawback has been largely overcome by treating reinforcing strength as a relative measure. What has become abundantly clear is that, as in the human population, the relative reinforcing strength of self-administered drugs in the laboratory is highly sensitive to both pharmacological and environmental manipulations. Further refinement of these laboratory methods holds promise for assessing abuse liability and for developing pharmacotherapies that may be useful in diminishing the reinforcing strength of abused drugs.

Acknowledgments

Research on the reinforcing effects of drugs in the authors’ laboratories is supported by the National Institute on Drug Abuse within NIH (grant nos. DA03774, DA10566, and DA15723). The authors thank W.H. Morse, J.L. Katz, and W.L. Woolverton, who have shared their insights and musings on the subject of this review.

References


Carol A. Paronis, PhD, is Assistant Professor of Psychobiology in the Department of Psychiatry at Harvard Medical School. She is Co-Director of the Preclinical Pharmacology Laboratory within the Mailman Research Institute at McLean Hospital. Address correspondence to CAP. E-mail: cparonis@hms.harvard.edu; fax: 617-855-2417.


Jack Bergman, PhD, is Associate Professor of Psychobiology in the Department of Psychiatry at Harvard Medical School and McLean Hospital. He is Director of the Behavioral Pharmacology Laboratory within the Alcohol and Drug Abuse Center and Co-Director of the Preclinical Pharmacology Laboratory within the Mailman Research Institute at McLean Hospital.
