What Is a TFLOP and What Is It Good For?

The performance of a computer isn’t easily quantifiable. A PC that’s great at one task might be just average at another, and a modern desktop might be more capable than a supercomputer from a decade ago, depending on the tasks you set it. Clock speeds, core counts, and even instructions per second aren’t always directly comparable either. Floating-point operations per second, or FLOPS, and more recently TFLOPS, are a measurement that can cross generations and even different components, giving a firmer sense of what a computer can do.

How Do TFLOPS Affect Performance?

Floating-point arithmetic is a method of computation that trades some accuracy for performance. As a metric, FLOPS measures how many of these calculations can be made per second, at 16-bit (half precision), 32-bit (single precision), or 64-bit (double precision). Different tasks lean on different precisions: gaming focuses on single precision, scientific computing relies on double precision, and AI workloads increasingly use half precision. Whatever task you’re performing, though, the modern devices you use to perform it are so fast that their performance isn’t measured in FLOPS but in teraFLOPS (TFLOPS), with each TFLOPS representing one trillion floating-point operations per second.

For the past decade, TFLOPS has been one of the major ways to measure performance, particularly graphics card performance. AMD released the first TFLOPS-capable graphics card in 2008, breaking the two-TFLOPS barrier that same year. Modern graphics cards and games consoles are far more capable than this, delivering many times the TFLOPS of those aged GPUs. A brand new RTX 3090 is rated for around 36 TFLOPS of shader performance, while mobile GPUs, like the Radeon Pro 5600M found in Apple’s MacBook Pro, are more modest at around 5.3 TFLOPS.
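A GPU’s quoted peak TFLOPS figure is usually derived from three numbers: shader core count, boost clock, and floating-point operations per clock (two, since a fused multiply-add counts as two operations). Here is a rough sketch of that arithmetic in Python, using the RTX 3090’s published core count and boost clock as assumed inputs:

```python
def peak_tflops(shader_cores: int, boost_clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical peak TFLOPS for a GPU.

    ops_per_clock defaults to 2 because a fused multiply-add (FMA)
    counts as two floating-point operations per cycle.
    """
    flops = shader_cores * boost_clock_ghz * 1e9 * ops_per_clock
    return flops / 1e12  # convert FLOPS to TFLOPS

# RTX 3090: 10,496 CUDA cores at a ~1.70 GHz boost clock (published specs)
print(round(peak_tflops(10496, 1.70), 1))  # ~35.7, in line with the ~36 TFLOPS rating
```

Note that this is the theoretical ceiling, not sustained throughput; real workloads rarely keep every shader core busy with an FMA on every cycle.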

PS5 Teraflops vs. Xbox Series X Teraflops

The next-generation games consoles from Sony and Microsoft, the PS5 and Xbox Series X, are expected to be the most capable games consoles ever. Both use a custom AMD APU (Accelerated Processing Unit) combining eight Zen 2 CPU cores with a custom RDNA 2 graphics core. With such comparable hardware, TFLOPS becomes a somewhat useful way to measure their capabilities. The PS5’s graphics processor is rated at 10.28 TFLOPS, while the Xbox Series X is expected to come in around 12 TFLOPS. Compared to the last generation of consoles, this is a big uplift: the Xbox One X was capable of six TFLOPS of single-precision compute, while the PS4 Pro could handle just 4.2 TFLOPS.
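On paper, the gap between the two consoles’ figures works out as follows (a simple back-of-the-envelope comparison of the quoted ratings, not a real-world benchmark):

```python
ps5_tflops = 10.28
xbox_series_x_tflops = 12.0

# Relative advantage of the Series X's rated shader throughput over the PS5's
advantage_pct = (xbox_series_x_tflops / ps5_tflops - 1) * 100
print(f"{advantage_pct:.0f}%")  # prints "17%"
```

As the next section explains, a ~17% difference in rated TFLOPS doesn’t translate directly into a 17% difference in game performance.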

The Limitations of TFLOPS

As useful as TFLOPS can be, it only captures one aspect of a graphics card’s or games console’s raw potential. It doesn’t factor in clock speed, architecture, core count, process node, pixel fill rate, or memory speed, among other measures of performance. It can be a useful metric to consider, but it’s not all-encompassing by itself.

This is especially true when it comes to gaming. Not only do other factors affect real-world gaming performance on the GPU itself, but gaming systems, whether they’re consoles or PCs, rely on the CPU, memory, and storage to deliver the whole gaming experience. Component bottlenecks can slow down the whole system, and not all aspects of a game stress each component equally. Much also depends on the settings the user chooses. You could have the most powerful graphics card in the world, with the highest TFLOPS rating possible, but if you’re playing at 1080p resolution, you won’t be using its full capacity, and it may perform no better than a GPU with a much lower TFLOPS figure. That goes doubly so for advanced visual features like Nvidia’s DLSS and ray tracing, which rely on dedicated RT and tensor cores rather than the general-purpose shader cores. Those cores have their own performance metrics, entirely separate from the GPU’s rated TFLOPS.