A doctor examines a patient in a drug study conducted by RapidTrials, a company known for finding ways to avoid delays in clinical trials, one of the most expensive parts of developing new products.

AP Photo/Mark Stehle

Modern Clinical Drug Trials

Walk into a Phase I U.S. clinical trial for a new drug and you'll likely find something between a frat house rec room and a hospital. Far from suffering, most of the test subjects would likely be busy watching TV or blasting through a few video games. As it turns out, catheters, needles and the occasional invasive surgical procedure don't really spoil a good time, especially when you're getting paid for it.

These mini-vacations for science can last anywhere from days to weeks and can pay in the thousands of dollars. Sometimes they're even held in hotel rooms. Phase I clinical trials typically involve otherwise healthy (though generally not gainfully employed) individuals. In this stage, researchers try to pinpoint dangerous side effects and potential complications. Phase II trials deal with dosing and efficacy, while Phase III trials enlist the help of actual patients to compare the experimental treatment to conventional ones, placebos or both.

A great deal of money and time goes into these tests because pharmaceutical companies have a limited window to get a new drug onto the market and profit from it. U.S. drug patents only last 20 years; if a new medication is tied up in testing for a decade, then it will only have 10 moneymaking years left in it. While the companies themselves are frequently criticized for their commercialism, it does take a considerable financial investment to see a medical discovery all the way to the point where it can help patients -- even with limited or no human testing. For financial or logistical reasons, many potential medical breakthroughs don't even make it through the experimental period, which may be why researchers dub it "the valley of death."

Pharmaceutical companies used to rely more on university research facilities or teaching hospitals -- which, in turn, gave them access to students who might appreciate a spring break full of experimental psychotropic drugs and repeat viewings of "The Wall." The downside to this, however, was that it introduced academic bureaucracy into an already highly regulated process. The FDA required the tests to be supervised by an institutional review board, and these were generally staffed by university faculty.

To streamline this ordeal, pharmaceutical companies now deal largely with commercial contract research organizations, which handle all the testing. In a 2008 article in The New Yorker, Carl Elliott described the resulting situation as a subculture of guinea pigs -- they even have their own publications, detailing which studies have the best pay and perks.