In Dan Brown's mega-bestselling 2003 mystery thriller "The Da Vinci Code," there's a bit of repartee between the hero, Robert Langdon, and cryptographer Sophie Neveu, in which she expresses skepticism about the value "of religious believers living by faiths that include miraculous occurrences. It appears their reality is false," she sneers.
Langdon laughs and says that those beliefs are no more bogus "than that of a mathematical cryptographer who believes in the imaginary number 'i' because it helps her break codes."
As it turns out, though, imaginary numbers – basically, numbers that, when squared, yield a negative result – really are a thing in mathematics, first discovered back in the 1400s and 1500s as a way to solve certain bedeviling equations. While initially thought of as a sort of parlor trick, in the centuries since they've come to be viewed as a tool for conceptualizing the world in complex ways, and today they're useful in fields ranging from electrical engineering to quantum mechanics.
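To make that definition concrete (a quick Python sketch, not from the article): the imaginary unit i squares to -1, and modern languages like Python handle such numbers natively, writing the imaginary unit as `1j`.

```python
# Python's built-in complex type: the imaginary unit is written 1j.
i = 1j
print(i * i)  # → (-1+0j): squaring the imaginary unit gives -1

# An equation with no real-number solution, x**2 = -9, does have a
# complex solution; cmath.sqrt handles negative inputs, unlike math.sqrt.
import cmath
root = cmath.sqrt(-9)
print(root)  # → 3j, since (3j) * (3j) == -9
```

This is exactly the "bedeviling equation" situation the early algebraists faced: the square root of a negative number, meaningless on the real number line, becomes a perfectly usable quantity once imaginary numbers are allowed.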
"We invented imaginary numbers for some of the same reasons that we invented negative numbers," explains Cristopher Moore. He's a physicist at the Santa Fe Institute, an independent research institution in New Mexico, and co-author, with Stephan Mertens, of the 2011 book "The Nature of Computation."
"Start with ordinary arithmetic," Moore continues. "What is two minus seven? If you've never heard of negative numbers, that doesn't make sense. There's no answer. You can't have negative five apples, right? But think of it this way. You could owe me five apples, or five dollars. Once people started doing accounting and bookkeeping, we needed that concept." Similarly, today we're all familiar with the idea that if we write big checks to pay for things, but don't have enough money to cover them, we could have a negative balance in our bank accounts.
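Moore's arithmetic is trivial to check once negative numbers are accepted (a Python sketch; the bank-balance framing is the article's, the variable names are illustrative):

```python
# "What is two minus seven?" -- nonsense in the counting numbers,
# but a perfectly good debt once negatives are allowed.
apples_owed = 2 - 7
print(apples_owed)  # → -5, i.e. owing five apples

# The overdrawn-account version of the same idea:
balance = 100 - 250  # write a check bigger than the account holds
print(balance)  # → -150, a negative bank balance
```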