Scientific Notation Is Math's Version of Shorthand

By: Mark Mancini & Yara Simón

Astronomers estimate there are at least 120 sextillion stars in the observable universe. By most accounts, that's a seriously impressive number. One sextillion is written out as a "1" followed by 21 zeros. And when we commit 120 sextillion to paper numerically, it looks like this: 120,000,000,000,000,000,000,000.

Numbers that large are staggering and can be challenging to comprehend. Luckily for us, scientific notation provides a vital solution for expressing such enormous (or minuscule) figures in a concise, standardized format. Instead of dealing with long strings of zeros, it offers a more manageable way to represent these numbers, making complex calculations and comparisons far more accessible.

Now let's explore the ins and outs of scientific notation and check out some real-world examples.

What Is Scientific Notation?

Scientific notation is a concise, standardized way of expressing extremely large or extremely small numbers. This type of notation has two essential components: a coefficient (a number whose absolute value is at least 1 and less than 10) and an exponent (an integer power of 10) that signifies the number's magnitude.

Think of scientific notation as a recipe card. The card's title and picture (analogous to the coefficient) tell you exactly which dish you're making, but not how big the batch is. The serving size (similar to the exponent) tells you the scale, whether you're cooking for two people or two hundred.

So, the coefficient is like the recipe card's title and picture, pinning down the details of the dish, while the exponent is like the serving size, setting its overall magnitude. Let's take a closer look at coefficients.

Coefficients

In simple terms, a coefficient is a number that is multiplied by another number or a variable in a mathematical expression. It's like a numerical factor that tells you how many times to multiply the variable or number it's associated with.

Exponents

An exponent is a small raised number that tells you how many times to multiply another number (called the base) by itself. For example, in the expression 2³, the base is 2, and the exponent is 3. It means you should multiply 2 by itself 3 times: 2 x 2 x 2, which equals 8.

Exponents are used to represent repeated multiplication in a more concise way, making it easier to work with very large or very small numbers in mathematics.
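If you'd like to see exponents in action on a computer, here's a quick sketch in Python, where the ** operator handles exponentiation:

```python
# The ** operator raises a base to an exponent (repeated multiplication).
print(2 ** 3)    # 2 x 2 x 2 = 8
print(10 ** 2)   # 10 x 10 = 100
print(10 ** 3)   # 10 x 10 x 10 = 1000
```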

More Examples of Scientific Notation

As any bank teller should know, 100 is equal to 10 x 10. But instead of writing "10 x 10" out, we could save ourselves some ink and write 10² instead. What's that itty-bitty "2" next to the number 10?

That's what's called an exponent. And the full-sized number (i.e., 10) to its immediate left is known as the base. The exponent tells you how many times you need to multiply the base by itself. So 10² is just another way of writing 10 x 10. Similarly, 10³ means 10 x 10 x 10, which equals 1,000.

(By the way, when solving math problems on a computer or scientific calculator, the caret symbol — or ^ — is sometimes used to denote exponents. Hence, 10² can also be written as 10^2, but we'll save that conversation for another day.)

Scientific notation relies on exponents. Consider the number 2,000. If you wanted to express this number in scientific notation, you'd write 2.0 x 10³. When you use scientific notation, what you're really doing is taking a number (i.e., 2.0) and multiplying it by a specific power of 10 (i.e., 10³).

The exponent (i.e., 3) signifies that you're multiplying the coefficient (i.e., 2.0) by 10 raised to the power of 3, effectively moving the decimal point three places to the right and resulting in the same number we started out with: 2,000.
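Curious readers can try this decimal-shifting procedure in Python. The helper function below is our own sketch (the name `scientific` is made up for illustration), and Python's built-in "e" format string serves as a cross-check:

```python
def scientific(n: float) -> tuple[float, int]:
    """Split a positive number into (coefficient, exponent) so that
    n == coefficient * 10**exponent and 1 <= coefficient < 10."""
    coefficient, exponent = n, 0
    while coefficient >= 10:   # shift the decimal point left
        coefficient /= 10
        exponent += 1
    while coefficient < 1:     # shift the decimal point right
        coefficient *= 10
        exponent -= 1
    return coefficient, exponent

print(scientific(2000))   # (2.0, 3), i.e., 2.0 x 10^3
print(f"{2000:.1e}")      # 2.0e+03, Python's built-in notation
```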

Scientific Notation vs. Decimal Notation

Decimal notation and scientific notation are two ways of writing numbers. In decimal notation, numbers are written with a standard series of digits and a decimal point, making it suitable for everyday use. For example, 3,500 represents 3,500 units; 22.5 represents 22 1/2 units.

In contrast, scientific notation is designed to handle extremely large or small numbers more efficiently. It consists of a coefficient between 1 and 10, multiplied by a power of 10. For instance, 3.5 x 10³ is equivalent to 3,500.

The key difference lies in how they handle scale: Decimal notation is more intuitive for everyday numbers, while scientific notation is ideal for simplifying complex calculations with numbers spanning vastly different magnitudes.

Applications of Scientific Notation

Scientific notation is widely used to express very large or very small numbers in a compact and standardized form. Here are some real-world examples:

• Astronomical distances: The distance from the Earth to the sun is approximately 93 million miles. In scientific notation, this is written as 9.3 x 10⁷ miles.
• Atomic sizes: The size of an atom is incredibly small, around 0.0000000001 meters. In scientific notation, this becomes 1 x 10⁻¹⁰ meters.
• Speed of light: The speed of light in a vacuum is about 299,792,458 meters per second. In scientific notation, it's 2.99792458 x 10⁸ m/s.
• Population counts: The world population, which exceeds 8 billion, can be expressed as 8 x 10⁹.
• Microorganisms: A typical bacterium might have a mass of 0.000000000001 grams, which can be written as 1 x 10⁻¹² grams.
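As a rough sketch, Python's "e" format string will render figures like these in E notation, scientific notation's computer-friendly cousin:

```python
# Format each quantity with two digits after the decimal point in E notation.
values = {
    "Earth-sun distance (miles)": 93_000_000,
    "Speed of light (m/s)": 299_792_458,
    "World population": 8_000_000_000,
    "Bacterium mass (grams)": 1e-12,
}
for name, value in values.items():
    print(f"{name}: {value:.2e}")
```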

Other Types of Notation

A notation is a system of symbols, signs or characters used to represent or convey information, often in a structured and standardized way. Here are some common types:

• Binary notation: Uses base-2, e.g., 10101 (binary representation of 21); fundamental in computer programming and digital systems
• Decimal notation: Uses digits and a decimal point for precision, commonly seen in everyday numbers and measurements
• Engineering notation: Simplifies large and small values for engineers — for instance, 2.5 kΩ (kilo-ohms) — aiding in engineering calculations
• Exponential notation (E notation): Represents values as a coefficient and an exponent, like 5.6E3 (that is, 5.6 x 10³); suitable for very large or small numbers in scientific and computing contexts
• Fractional notation: Represents numbers as fractions, like 1/2; useful for precise divisions and ratios
• Hexadecimal notation: Represents numbers in base-16, such as 1A (hexadecimal representation of 26); frequently used in computer science for memory addresses and color codes
• Percent notation: Expresses values as percentages, such as 50 percent, making comparisons relative to 100 units
• Roman numerals: Uses letters like X for 10 and XL for 40; often used in historic dates and formal titles, adhering to specific rules for numerical representation

A Sextillion by Another Name

All right, time to have some fun. Through the steps we outlined above, we can use scientific notation to express 4,000 as 4.0 x 10³. Likewise, 27,000 becomes 2.7 x 10⁴ and 525,000,000 turns into 5.25 x 10⁸.

But how could we convert 120 sextillion, that giant, unwieldy number from our opening sentence? First, take a good, hard look at 120,000,000,000,000,000,000,000. Altogether, there are 23 digits behind the "1." (Go ahead and count 'em up. We'll wait.)

Ergo, in scientific notation, 120,000,000,000,000,000,000,000 is expressed as 1.2 x 10²³.
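You can verify that digit count with a quick Python sketch; the underscores in the numeric literal are just visual separators:

```python
# 120 sextillion: a "1", then 23 more digits (a "2" and 22 zeros).
n = 120_000_000_000_000_000_000_000
assert n == 12 * 10 ** 22
print(f"{n:.1e}")   # 1.2e+23
```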

But admit it, the latter is way easier on the eyes. Besides, the exponent gives you an immediate sense of how ginormous the total number really is. And it does so in a way that tallying up the zeros never could. Such is the simplifying beauty of scientific notation.

Going Negative

You'll be happy to know this process can also be applied to the other extreme: numbers that are smaller than one.

Suppose you've only got one-tenth of an apple. Mathematically that means you have 0.10 apples at your disposal. Likewise, if there's only one-millionth of an apple on your lunch tray, you're dealing with a paltry 0.000001 apples. Tough break.

There's a way to write this sum down using scientific notation — and it's not all that different from the technique we've been practicing.

Here, we'll need to take the existing decimal point and put it to the right of the number's first non-zero digit. Do that and you'll wind up with a plain old "1." In the name of mathematical clarity, we'll write this as "1.0."

OK, so in order to get 0.000001, we'll need to multiply our 1.0 by a power of 10. But here's the twist: The exponent will be a negative number.

Take another gander at 0.000001. See how there are six digits behind the decimal point? That forces us to multiply our 1.0 by 10⁻⁶. So in summary, 1.0 x 10⁻⁶ is how we express one-millionth, or 0.000001, in scientific notation.

By the same token, 6.0 x 10⁻³ means 0.006. Accordingly, 0.00086 would be written as 8.6 x 10⁻⁴. And so on. Happy calculating.
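If you'd like a sanity check, Python's "e" format confirms these negative-exponent conversions:

```python
# Numbers below 1 get negative exponents in scientific notation.
print(f"{0.000001:.1e}")   # 1.0e-06
print(f"{0.006:.1e}")      # 6.0e-03
print(f"{0.00086:.1e}")    # 8.6e-04
```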

This article was updated in conjunction with AI technology, then fact-checked and edited by a HowStuffWorks editor.