Predicting how a drug behaves in the body can be accomplished through mathematical modeling of the time course of the drug in the body, or pharmacokinetics. Simplistically, pharmacokinetics describe what the body does to the drug, whereas pharmacodynamics describe what the drug does to the body. Pharmacokinetics are determined by following changes in plasma drug concentrations after a dose of the drug is administered via the intended clinical route and, ideally, also after IV administration (100% bioavailability). The time course of plasma drug concentrations is mathematically "modeled" so that the physiologic events underlying the changes in drug concentration can be determined. Most pharmacokinetic studies are conducted in healthy animals, yet dosing regimens should be individualized to adjust for physiologic (age, gender, species, and breed), pharmacologic (drug interactions), or pathologic (eg, renal or hepatic disease) differences or for animals receiving multiple drugs.
The pharmacodynamic response to a drug generally reflects the number of receptors with which the drug interacts (drug-receptor theory). In most instances, tissue drug concentrations parallel plasma drug concentrations.
After intravenous administration, the most relevant pharmacokinetic parameters that describe a drug and provide a basis for the dosing regimen are the apparent volume of distribution and the plasma clearance, both of which determine the elimination rate constant and elimination half-life. Additional parameters include the distribution rate constant and half-life and, if the drug is also given orally, the absorption rate constant and half-life.
After a drug is administered by rapid IV (eg, bolus) injection, the drug is immediately distributed to the "central" vascular compartment, which includes highly perfused organs. Plasma drug concentrations then immediately begin to decline for two reasons: distribution of drug from plasma into tissues (and back) and elimination from the body through irreversible removal (ie, metabolism or excretion). As such, the decline in plasma drug concentrations initially is rapid; however, once distribution reaches a "pseudo" equilibrium such that the amount of drug moving into tissues equals that moving back into plasma, plasma drug concentrations decline only because of elimination from the body (metabolism and excretion). Each drug movement is "first order," meaning a constant fraction or percentage (rather than amount) moves per unit time. As such, the time course of the drug in plasma must be plotted semilogarithmically (y = log plasma drug concentration, x = time). The result is a decline in plasma drug concentrations that can be "fitted" by two lines: the first, representing both distribution and elimination, declines very rapidly (steep slope); the second, "terminal" line has a flatter slope because it represents elimination only. If the terminal line is extrapolated back to the y-axis, the Y intercept represents the plasma drug concentration after distribution has reached pseudoequilibrium ("B"). The slope of this terminal line is the elimination rate constant, kel, from which the elimination half-life, t1/2, is derived. The line that describes distribution is likewise characterized by a Y intercept (the drug concentration in the "central" compartment, or "A") and a distribution rate constant (often referred to as α or kd); from this rate constant, a distribution half-life can be determined.
Once the curve is mathematically described by its slopes and Y intercepts, the plasma drug concentration (Cp) at any time (t) after the drug is administered can be predicted: Cp = Ae^(–αt) + Be^(–kel·t), in which e is the base of the natural logarithm. From these data, clinically relevant parameters that influence the dosing regimen are then determined.
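The biexponential equation can be evaluated numerically; the sketch below uses purely illustrative (hypothetical) values for A, α, B, and kel to show how the steep distribution term fades and the terminal elimination phase comes to dominate.

```python
import math

def cp(t, A, alpha, B, kel):
    """Plasma drug concentration at time t for a two-compartment model:
    Cp = A*e^(-alpha*t) + B*e^(-kel*t)."""
    return A * math.exp(-alpha * t) + B * math.exp(-kel * t)

# Illustrative values only: A = 30 mg/L, alpha = 1.5/hr, B = 20 mg/L, kel = 0.1/hr
for t in (0, 1, 4, 12):
    print(f"t = {t:>2} hr: Cp = {cp(t, 30, 1.5, 20, 0.1):.2f} mg/L")
```

At t = 0, Cp is simply A + B; as t grows, the α term vanishes first, leaving the terminal line described by B and kel.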
If both the dose (mg/kg) and the drug concentration in plasma (the Y intercept of the terminal component of the plasma drug concentration [PDC] versus time curve, or "B") are known, then an "apparent" volume of distribution (Vd) can be calculated: Vd = dose/PDC. This theoretical volume describes the volume to which the drug must have distributed if the concentration in plasma represents the concentration throughout the body (ie, distribution has reached equilibrium). The term "apparent" underscores that Vd does not reveal where the drug is distributed, only that it has left the plasma. Vd indicates the size of the compartment into which a drug appears to have distributed in relation to its concentration in plasma; it is usually reported in liters per kilogram (L/kg) and is determined by measuring the PDC after distribution has reached equilibrium following IV administration. For example, if 12 mg/kg of phenobarbital is given, and the resulting PDC after distribution has occurred is 20 mg/L, then the apparent volume to which phenobarbital is distributed is Vd = (12 mg/kg)/(20 mg/L) = 0.6 L/kg.
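As a minimal sketch, the phenobarbital example above can be reproduced directly from Vd = dose/PDC:

```python
def apparent_vd(dose_mg_per_kg, pdc_mg_per_L):
    """Apparent volume of distribution (L/kg): Vd = dose / PDC."""
    return dose_mg_per_kg / pdc_mg_per_L

# Phenobarbital example from the text: 12 mg/kg IV, post-distribution PDC of 20 mg/L
print(apparent_vd(12, 20))  # 0.6 L/kg
```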
The Vd is useful for three reasons. First, and perhaps most importantly, it can be used to calculate a dose if the target PDC is known: dose = Vd × target PDC. For example, if the Vd of phenobarbital is 0.6 L/kg, and the target concentration of phenobarbital in a drug-naive animal is 10 mg/L, the IV dose would be 10 mg/L × 0.6 L/kg, or 6 mg/kg. Second, if the PDC at any time after the dose is known, the Vd can be used to calculate how much drug remains in the body. Finally, the Vd can be used to predict the relative ability of the drug to distribute to different body compartments: if the drug is limited to the extracellular compartment (interstitial fluid, plasma), as is typical of water-soluble drugs, this represents 20%–30% of the body weight, and the Vd of such a drug should be <0.3 L/kg. Lipid-soluble drugs are generally able to penetrate cell membranes and thus are distributed to both extracellular and intracellular fluid, which together represent ~60% of the body weight; such drugs are generally characterized by a Vd >0.6 L/kg. Some drugs are limited to the plasma compartment and do not distribute well. An example would be a drug very tightly bound to plasma proteins; for such drugs, the Vd approximates the size of the blood compartment, or ~0.1 L/kg. However, as the drug is freed from the protein, it will leave the plasma compartment and distribute into tissues. Many drugs are characterized by a Vd that exceeds the body weight of the animal (ie, >1 L/kg). For example, the mean digoxin Vd in dogs is 13 L/kg. This underscores that the basis for determining Vd is the PDC: if the drug leaves the plasma, regardless of where it goes, the Vd will increase. For digoxin, the drug binds to cardiac tissue; however, this is known only because follow-up studies demonstrated it.
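A hedged sketch of the dose calculation and of the rough Vd thresholds described above; the function names and category labels are illustrative, not standard nomenclature:

```python
def dose_for_target(vd_L_per_kg, target_mg_per_L):
    """IV dose (mg/kg) needed to achieve a target PDC: dose = Vd x target."""
    return vd_L_per_kg * target_mg_per_L

def distribution_pattern(vd):
    """Rough interpretation of Vd (L/kg) using the text's thresholds."""
    if vd < 0.3:
        return "largely extracellular (typical of water-soluble drugs)"
    if vd > 1.0:
        return "exceeds body water; suggests tissue binding or sequestration"
    if vd > 0.6:
        return "total body water (typical of lipid-soluble drugs)"
    return "intermediate distribution"

print(dose_for_target(0.6, 10))  # phenobarbital example: 6.0 mg/kg
print(distribution_pattern(13))  # digoxin in dogs (Vd = 13 L/kg)
```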
The Vd of a drug is usually constant over a wide dose range for a given species. However, a number of clinically significant factors can influence the Vd, including age (larger in neonates and pediatrics, smaller in geriatrics); functional status of the kidneys (decreased with dehydration), liver (increased with edema), and heart; fluid accumulations; concentration of plasma proteins (influencing unbound drug only); acid-base status (particularly if ion trapping causes the drug to accumulate in tissues); inflammatory processes or necrosis; and any other causes for alteration in the degree of plasma-protein binding.
As soon as a drug reaches the systemic circulation, it immediately begins to be cleared from plasma. Clearance is the volume of blood from which a drug is irreversibly eliminated, or cleared, per unit time. Plasma is most commonly sampled; plasma clearance represents the sum of the clearances by all organs. If the drug is cleared by only a single organ, then plasma clearance equals the clearance of that organ. Because clearance is a volume cleared per unit time, its units, when normalized to body weight, are volume/mass/time (eg, mL/kg/min). An alternative definition of clearance is the volume of plasma that would contain the amount of drug excreted per unit time; this definition demonstrates the link between the volume of distribution (Vd) and clearance: the elimination rate constant (kel) describes the fraction of the Vd cleared per unit time, and together they can be used to calculate clearance (CL) from the PDC vs time curve: CL = Vd × kel. As such, CL directly influences kel, ie, the rate at which drug is eliminated from the body: as CL increases, the slope kel becomes steeper. Clearance is independent of the Vd of a drug and thus of the concentration of drug in the blood; no matter how much drug is in the blood, the same volume will be cleared per unit time.
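The relationship CL = Vd × kel can be sketched numerically; the Vd and kel values below are hypothetical, chosen only to show the unit conversion to mL/kg/min:

```python
def clearance(vd_L_per_kg, kel_per_hr):
    """Plasma clearance from the PDC vs time curve: CL = Vd x kel (L/kg/hr)."""
    return vd_L_per_kg * kel_per_hr

cl = clearance(0.6, 0.1)  # hypothetical drug: Vd = 0.6 L/kg, kel = 0.1/hr
print(f"CL = {cl:.2f} L/kg/hr = {cl * 1000 / 60:.1f} mL/kg/min")
```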
The two major organs responsible for clearance are the liver and kidneys. Once a drug is metabolized, it is irreversibly eliminated from the body. Its metabolites, however, must be excreted (usually by the kidneys). Hepatic clearance is defined as the volume of plasma totally cleared per unit time as blood passes through the liver. The rate of hepatic clearance depends on drug delivery to the liver, ie, blood flow (Q) and the extraction (E) ratio of the drug, or fraction of the drug removed as it passes through the liver. Extraction, in turn, is determined by the intrinsic clearance (metabolic capacity) of the liver. Drugs cleared by the liver fall into two major categories. "Flow-limited" drugs are extracted so rapidly that Q becomes the limiting factor of hepatic clearance. Binding to plasma proteins will not influence clearance of such drugs. In contrast, the rate-limiting step of "capacity-limited" drugs is intrinsic clearance, ie, the metabolic capacity of the liver. For such drugs, binding to serum proteins will decrease the rate of clearance. As such, highly protein-bound drugs are referred to as "capacity limited, binding sensitive" as opposed to drugs not highly protein bound and thus "capacity limited, binding insensitive."
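The dependence of hepatic clearance on blood flow (Q) and extraction ratio (E) is conventionally written CLh = Q × E, which is consistent with the description above. A minimal sketch, in which the hepatic blood flow and the two extraction ratios are purely illustrative assumptions:

```python
def hepatic_clearance(Q_mL_per_kg_min, E):
    """Hepatic clearance: CLh = hepatic blood flow (Q) x extraction ratio (E)."""
    return Q_mL_per_kg_min * E

# Hypothetical flow-limited drug (E = 0.9) vs capacity-limited drug (E = 0.1),
# assuming hepatic blood flow of ~30 mL/kg/min (illustrative value only)
for E in (0.9, 0.1):
    print(f"E = {E}: CLh = {hepatic_clearance(30, E):.1f} mL/kg/min")
```

For the flow-limited drug, CLh tracks Q almost directly; for the capacity-limited drug, changes in Q matter little compared with changes in intrinsic metabolic capacity.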
Hepatic disease differentially impacts flow- and capacity-limited drugs. Hepatic clearance of flow-limited drugs will markedly decrease with changes in hepatic flow such as might occur with portosystemic shunting. When administered orally, such drugs are normally characterized by a high first-pass metabolism and reduced oral bioavailability. With portosystemic shunting, oral bioavailability can markedly increase and, as such, oral doses must be decreased in proportion to the shunted blood. Changes in hepatic mass and function will impact capacity-limited drugs. In general, if liver disease has negatively impacted serum albumin and BUN, the intrinsic metabolic capacity of the liver is also likely to be negatively impacted. However, if protein-binding decreases for a highly protein-bound drug such that more of the drug is unbound, hepatic clearance may not be as negatively impacted.
Renal clearance is defined as the volume of plasma totally cleared of a drug per unit time (eg, 1 min) during passage through the kidneys. The renal clearance of drugs depends primarily on renal blood flow but also is impacted by urine pH, extent of plasma-protein binding, urine concentrating ability, and concomitant use of certain drugs. Serum creatinine concentration or creatinine clearance can be used to assess changes in renal clearance as renal function declines, and either the dose or the dosing interval can be proportionately modified. For drugs with a short half-life, prolonging the interval (rather than decreasing the dose) is the more appropriate response as serum creatinine increases; for drugs that accumulate because of a long half-life, the dose might be proportionately decreased or the interval proportionately prolonged.
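One crude way to implement "proportionate modification" is to prolong the interval in proportion to the rise in serum creatinine. The sketch below assumes that clearance falls roughly in proportion to the creatinine increase; this is an approximation for illustration, not a substitute for therapeutic drug monitoring:

```python
def adjusted_interval(normal_interval_hr, patient_creat, normal_creat):
    """Rough renal dose-interval adjustment: prolong the dosing interval in
    proportion to the rise in serum creatinine (illustrative approximation)."""
    return normal_interval_hr * (patient_creat / normal_creat)

# Hypothetical example: 8-hr interval, creatinine doubled (2.4 vs 1.2 mg/dL)
print(adjusted_interval(8, 2.4, 1.2))  # 16.0 hr
```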
Among the most commonly cited pharmacokinetic parameters is the elimination half-life. It is derived from the elimination rate constant, kel, which is the slope of the terminal, or elimination, component of the PDC vs time curve. A "hybrid" parameter, kel is impacted by both CL and Vd. CL determines the decline in PDC: the greater the volume of blood cleared per unit time, the steeper the slope, kel. The impact of Vd reflects its effect on PDC: a larger Vd means less drug is present in each volume of blood cleared by the liver or kidneys. As such, the rate of elimination declines as Vd increases, an inverse relationship. The elimination half-life of a drug is the time that elapses as the PDC declines by 50%. It is calculated from the slope of the terminal line, kel: t1/2 = 0.693/kel. This relationship reflects the fact that the half-life is the run of the slope (t2 − t1) over which the concentration declines by 50% (ie, C1/C2 = 2), and the natural log of 2 is 0.693. Because t1/2 is inversely proportional to kel, t1/2 is directly proportional to Vd (a larger Vd results in a longer half-life) and inversely proportional to CL. Note that CL and Vd can be profoundly altered, yet t1/2 may not change. For example, in an animal dehydrated because of renal dysfunction, CL may be decreased by 50%, doubling t1/2. However, if the animal is markedly dehydrated, Vd will also decrease because of contraction of extracellular fluid volume. Because more drug is in each mL of blood cleared by the kidney, the same amount of drug may be eliminated, and as such kel or t1/2 may not change.
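The offsetting effect of simultaneous CL and Vd changes can be shown numerically using t1/2 = ln(2) × Vd/CL (equivalent to t1/2 = 0.693/kel, since kel = CL/Vd); the baseline values below are chosen purely for illustration:

```python
import math

def half_life(vd_L_per_kg, cl_L_per_kg_hr):
    """Elimination half-life (hr): t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_L_per_kg / cl_L_per_kg_hr

base = half_life(0.6, 0.06)        # hypothetical baseline: ~6.9 hr
dehydrated = half_life(0.3, 0.03)  # CL and Vd both halved: t1/2 unchanged
print(f"baseline t1/2 = {base:.1f} hr, dehydrated t1/2 = {dehydrated:.1f} hr")
```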
The elimination half-life determines the time to steady-state (see below) and the time for a drug to be eliminated from the body once drug administration is discontinued. Once a drug is discontinued, 50% of the drug is eliminated in one half-life, 75% in the second (half of 50%), 87.5% in the third, and so on. For practical purposes, most drug is eliminated by 3–5 half-lives. The t1/2 along with tolerances determines withdrawal or milk discard times in food animals. The relationship between dosing interval and elimination half-life also determines whether a drug will fluctuate or accumulate during a dosing interval.
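The halving sequence above (50%, 75%, 87.5%, …) is simply 1 − (1/2)^n, where n is the number of elapsed half-lives; a minimal sketch:

```python
def fraction_eliminated(n_half_lives):
    """Fraction of drug eliminated after n elimination half-lives."""
    return 1 - 0.5 ** n_half_lives

for n in (1, 2, 3, 5):
    print(f"{n} half-lives: {fraction_eliminated(n):.2%} eliminated")
```

By 5 half-lives, ~97% of the drug has been eliminated, matching the "3–5 half-lives" rule of thumb.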
When a drug is administered by an extravascular route, plasma drug concentrations rise until a peak or maximum drug concentration (Cmax) is reached. Once the drug enters the circulation, it is subjected immediately and simultaneously to distribution, metabolism, and excretion. The plasma drug concentration vs time curve after extravascular administration has an additional Y intercept and slope, the latter reflecting the rate constant of absorption, ka. The absorption half-life is the time that elapses as 50% of the drug is absorbed into the system. Absorption is generally slow enough that drug distribution is "masked" by the absorption phase. As such, as plasma drug concentrations decline after Cmax is reached, the slope generally reflects kel.
The term “bioavailability” is used to express the rate and extent of absorption of a drug (see above).
In some cases, the desired therapeutic effect of a drug is produced with a single dose. However, to achieve a satisfactory response, it is frequently necessary to maintain drug concentrations in the therapeutic range for a longer time. Rather than administering large doses, which could result in potentially toxic plasma drug concentrations, smaller doses are administered at regular, safer intervals. For drugs with a very short half-life, the drug may be administered through a catheter as a constant-rate infusion, which is essentially continuous IV delivery. The appropriate dose and frequency of administration depend on how much fluctuation in drug concentration can be tolerated during a dosing interval, which in turn is determined by the relationship between t1/2 and the dosing interval, τ.
If a drug is administered at an interval substantively longer than its half-life, most of the drug will be eliminated during each dosing interval. As such, little drug remains when the subsequent dose is given, and plasma drug concentrations will fluctuate (Cmax to Cmin) during the dosing interval. For example, if a drug with a 4-hr half-life is administered every 12 hr, 87.5% of the drug will be eliminated during each dosing interval. With each dose, there is a risk of drug concentrations becoming subtherapeutic; increasing the dose will result in a small increase in Cmin but may substantially increase Cmax, thus increasing the risk of toxicity. A more appropriate response would be to decrease the dosing interval. However, this may be necessary only if drug efficacy depends on the continuous presence of the drug. For example, this degree of fluctuation may be acceptable for a concentration-dependent antimicrobial. However, if the drug is an anticonvulsant, the risk of seizures increases just before the next dose, and if the drug is a time-dependent antimicrobial, drug concentrations may drop below the minimum inhibitory concentration of the infecting microbe. In contrast to drugs with a short half-life, drugs with a half-life that is long compared with the dosing interval will accumulate with each dose, because much of the drug remains in the body when the next dose is given. Such drugs begin to accumulate with the first dose and continue to do so until a "steady state" equilibrium is reached such that the amount of drug eliminated during each dosing interval equals the amount of drug administered during that same interval. The accumulation ratio describes the magnitude of increase of either Cmax or Cmin at steady state compared with the first dose. The longer the half-life compared with the dosing interval, the greater the accumulation ratio. The time to steady state, regardless of the drug or dose, is 3–5 drug elimination half-lives.
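For a first-order drug given at fixed intervals, the accumulation ratio can be sketched as R = 1/(1 − 2^(−τ/t1/2)), a standard relationship consistent with (though not stated explicitly in) the text. The second example's 48-hr half-life is a hypothetical value chosen to contrast with the text's 4-hr example:

```python
def accumulation_ratio(interval_hr, t_half_hr):
    """Steady-state accumulation ratio for repeated first-order dosing:
    R = 1 / (1 - 2^(-tau / t1/2))."""
    return 1 / (1 - 0.5 ** (interval_hr / t_half_hr))

# Text's example: t1/2 = 4 hr, dosed every 12 hr -> minimal accumulation
print(round(accumulation_ratio(12, 4), 2))   # ~1.14
# Long half-life relative to the interval: t1/2 = 48 hr, dosed every 12 hr
print(round(accumulation_ratio(12, 48), 2))  # substantial accumulation
```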
However, administration must occur with the same preparation at the same dosing regimen. In such cases, 50% of the plateau or steady-state concentration will be reached in one t1/2, 75% at two t1/2, 87.5% at three t1/2, and 93.75% at four t1/2. As with drug elimination, for practical purposes, steady state is achieved by 3–5 half-lives. Response to the drug, whether efficacy or toxicity, cannot be assessed until steady state is reached. Because the amount of drug in the body is large compared with each dose, manipulating plasma drug concentrations for such drugs is difficult, because changes require dosing for 3–5 half-lives at the new dose.
If the time to reach steady state, and thus the time to therapeutic effect, is unacceptable, steady-state plasma drug concentrations may be achieved more rapidly by administration of a loading dose or doses (dose = Vd × target concentration; if the drug is given orally, dose = [Vd/F] × target concentration). However, the drug is then not at steady state but only at steady-state concentrations. If the maintenance dose does not maintain what the loading dose achieved, then as steady state at the maintenance dose is reached, plasma drug concentrations may increase to toxic or decline to subtherapeutic concentrations. Drugs with very short half-lives are often administered by constant-rate infusion in animals in critical condition; in such cases, the interval is infinitely short compared with the half-life, and the drug will accumulate until steady state is reached. The rate of infusion (mcg/kg/min) is equal to CL (mL/min/kg) × target concentration (mcg/mL); a loading dose should be given if the time to steady state is unacceptably long.
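The two calculations above (infusion rate = CL × target concentration; loading dose = [Vd/F] × target concentration) can be sketched together; the CL, Vd, and target values are hypothetical, chosen only to make the units work through cleanly:

```python
def infusion_rate(cl_mL_per_kg_min, target_mcg_per_mL):
    """Constant-rate infusion: rate (mcg/kg/min) = CL x target concentration."""
    return cl_mL_per_kg_min * target_mcg_per_mL

def loading_dose(vd_L_per_kg, target_mcg_per_mL, F=1.0):
    """Loading dose (mcg/kg) = (Vd / F) x target concentration.
    F = 1 for IV administration; Vd is converted from L/kg to mL/kg."""
    return (vd_L_per_kg * 1000 / F) * target_mcg_per_mL

# Hypothetical drug: CL = 1 mL/kg/min, Vd = 0.6 L/kg, target = 2 mcg/mL
print(infusion_rate(1, 2))   # 2 mcg/kg/min maintains the target at steady state
print(loading_dose(0.6, 2))  # 1200 mcg/kg (1.2 mg/kg) reaches it immediately
```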