AI 101: What are FLOPS in AI?


Along with an LLM’s “size” in parameters, FLOPS are one of the more confusing terms batted around in AI circles. So: What are FLOPS in AI? In Machine Learning? In supercomputers?

Essentially, FLOPS is a unit of measurement for the raw performance of any computer, chip, or datacenter, and in particular it is the metric by which the speed of supercomputers (or, more commonly, supercomputing clusters) is ranked.

FLOPS Defined

FLOPS is a loose acronym that stands for FLoating-point OPerations per Second. A single floating-point operation is a calculation between two numbers with decimal (fractional) values (hence the "floating point"), such as:

89.72134546 x 4.000121278 = ?

Astonishingly (and we have come to expect, even demand, this), modern computers are capable of performing bazillions of these floating-point operations per second of human time. I use "bazillion" as a stand-in term.

But you get the point. A computer's performance in FLOPS is essentially a rough measure of how much algorithmic horsepower a given system can deliver per second. The numbers are so huge that they are well outside most of our vocabulary limits.
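If you want a concrete feel for this on your own machine, here is a minimal back-of-the-envelope sketch in Python (assuming NumPy is installed; the variable names are my own). It times one large matrix multiplication and divides the approximate operation count by the elapsed time:

```python
# Rough, back-of-the-envelope FLOPS estimate for whatever machine runs this.
# Assumes NumPy is installed; the result is only a ballpark figure.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
_ = a @ b                      # one n x n matrix multiplication
elapsed = time.perf_counter() - start

flop_count = 2 * n ** 3        # a matmul costs roughly 2 * n^3 floating-point operations
print(f"~{flop_count / elapsed / 1e9:.1f} gigaFLOPS")
```

A typical laptop will report somewhere in the tens to hundreds of gigaFLOPS, which already makes the prefixes below feel a little more real.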

So, basically, we use FLOPS to measure the performance of the world's fastest supercomputers, datacenters, and large-scale compute clusters. This field is also known in the industry as HPC, or "high performance computing."

Technically, the performance figures are expressed with prefixes very similar to those used for phone storage and hard drive capacities: words like gigaFLOPS, teraFLOPS, and yottaFLOPS.

[image: sample FLOPS prefixes and powers]

Simply:

1 gigaFLOPS = a billion FLOPS = 1,000,000,000 FLOPS
[that reads as 10^9, or a 1 followed by 9 zeros]
[ever heard of “three commas”? there you go]

1 petaFLOPS = a million gigaFLOPS = 1,000,000,000,000,000 FLOPS
[that reads as 10^15, or a 1 followed by 15 zeros]

1 yottaFLOPS = 1,000,000,000,000,000,000,000,000 FLOPS
[that’s one billion petaFLOPS]
[that reads as 10^24, or a 1 followed by 24 zeros]

🤯
[yes, these numbers explode the human brain, and are essentially incomprehensible… they are, basically, GINORMOUS BEYOND BELIEF]

For the curious:

What are FLOPS in AI? Prefixes & Exponents

prefix      | FLOPS | example / notes
gigaFLOPS   | 10^9  | a billion FLOPS
teraFLOPS   | 10^12 | a trillion FLOPS
petaFLOPS   | 10^15 |
exaFLOPS    | 10^18 | Frontier, the world's fastest supercomputer in 2023, runs at ~1.1 exaFLOPS
zettaFLOPS  | 10^21 |
yottaFLOPS  | 10^24 | 100 yottaFLOPS is the AI compute threshold at which the federal government requires registration
brontoFLOPS | 10^27 |
geoFLOPS    | 10^30 | that's a 1 followed by 30 zeros. That's… universe-sized. Every second.
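For the curious-and-coding, here is a tiny sketch that turns a raw FLOPS figure into one of the prefixes from the table above (the prefix list and function name are my own illustration, not anything standard):

```python
# Express a raw FLOPS figure using the prefixes from the table above.
# Purely illustrative; the prefix list and function name are not from any library.
PREFIXES = [
    ("geoFLOPS", 1e30), ("brontoFLOPS", 1e27), ("yottaFLOPS", 1e24),
    ("zettaFLOPS", 1e21), ("exaFLOPS", 1e18), ("petaFLOPS", 1e15),
    ("teraFLOPS", 1e12), ("gigaFLOPS", 1e9),
]

def humanize_flops(flops: float) -> str:
    for name, scale in PREFIXES:
        if flops >= scale:
            return f"{flops / scale:.3g} {name}"
    return f"{flops:.3g} FLOPS"

print(humanize_flops(1.102e18))   # -> "1.1 exaFLOPS" (Frontier)
print(humanize_flops(1e26))       # -> "100 yottaFLOPS"
```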

How much electricity does a supercomputer use?

Now, we might wish to look at the POWER CONSUMPTION of such supercomputing monsters. This comes down to a second calculation, which is essentially: how many FLOPS can we deliver for the power draw of a single 60 W lightbulb?

As you might imagine, as compute power scales, these numbers get astronomical. It is not uncommon for a major datacenter to have a continuous power draw equivalent to a city of 100,000 people or more (homes and businesses, all inclusive). Witness:

Today the TOP500 (the list of the 500 fastest supercomputers in the world) is led by Frontier at 1.102 exaFLOPS, and it draws a continuous 21.1 MW to run. At that level of energy efficiency, achieving 1 zettaFLOPS would require roughly 21 GW (gigawatts).

Put in layman's terms: to power that single zettaFLOPS supercomputer, 21 large-scale nuclear power plants with a capacity of 1,000 MW each would be required (an average nuclear power plant generates half that, about 500 MW).
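Here is the arithmetic behind those two paragraphs as a quick sketch. The inputs are the TOP500 figures quoted above; note that with Frontier's exact numbers the answer lands nearer 19 GW, while the ~21 GW above comes from rounding Frontier to a flat 1 exaFLOPS:

```python
# Scale Frontier's energy efficiency up to a hypothetical 1 zettaFLOPS machine.
frontier_flops = 1.102e18     # 1.102 exaFLOPS (TOP500 figure quoted above)
frontier_power_w = 21.1e6     # 21.1 MW continuous draw

efficiency = frontier_flops / frontier_power_w   # ~52 gigaFLOPS per watt
zetta_power_w = 1e21 / efficiency                # power for 1 zettaFLOPS at that efficiency

print(f"{efficiency / 1e9:.1f} gigaFLOPS per watt")      # ~52.2
print(f"{zetta_power_w / 1e9:.1f} GW for 1 zettaFLOPS")  # ~19.1 GW (the article rounds to ~21 GW)
```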

However…

Compute Power Efficiency is Improving too

Staring at this energy abyss, and facing the spectre of Climate Change, humans are creating ever more power-efficient computing processors.

While the gains are not as steep as Moore's law, engineers are still delivering remarkable improvements in energy efficiency. Compute efficiency is typically measured in some multiple of FLOPS per watt, where the watt is the unit of electrical power consumption.

If the current rate of improvement in energy efficiency is maintained, this figure will reach only about 2,140 gigaFLOPS per watt by 2035, so a hypothetical zettaFLOPS cluster would still require roughly 500 MW: a dedicated nuclear power plant for that one single supercomputer.
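A one-line check of that projection (the 2,140 gigaFLOPS-per-watt figure is the article's own; everything else is arithmetic):

```python
# Power needed for 1 zettaFLOPS at the projected 2035 efficiency of 2,140 gigaFLOPS/W.
projected_efficiency = 2140e9            # FLOPS per watt
zetta_power_w = 1e21 / projected_efficiency

print(f"{zetta_power_w / 1e6:.0f} MW")   # ~467 MW, i.e. roughly the 500 MW quoted above
```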

Plausible, methinks.

What are FLOPS in AI? Hope you get it by now. Back to work!



key prompt: “make me a picture. the idea is to represent the exponential growth of supercomputing power across the decades. in large letters, it should include the word ‘zFLOp/s.”

    • prompt (adjust 1): “thats really nice. just one mod. make the type bigger, and the flow of the curve should be “up and to the right.” go.”
    • prompt (adjust 2): “that’s great. now add really big letters that read ‘gFLOP/S’”
    • prompt (final adjust): “looks too 70s-ish. make it more chalkboard style. and the word ‘gFLOP/s’ should be +huge+, like 1/4 the height of the total image’”

engine: DALL-E3, OpenAI

