I see an octogenarian in a MAGA hat, railing against the 21st century. Face it, the imperial system is an anachronism that the rest of the world long ago consigned to the dustbin of history. The plain and simple reason is that, in the metric system, you only ever divide or multiply by 10. It's uncomplicated, efficient and easy to learn. Why is this not self-evident?
As for gallons, fluid ounces, pints, quarts and gills (?), a US gallon isn't even the same size as an imperial gallon. A litre, OTOH, is a litre (or liter) everywhere on the planet, even in the USA and Myanmar, and it has never varied.
(One of the most epic fails was the 1999 Mars Climate Orbiter, which NASA bounced off the Martian atmosphere and never saw again. It turned out that NASA's navigation software expected thruster impulse in newton-seconds, while Lockheed's idiots were supplying it in pound-force seconds. They may as well have been using furlongs per fortnight.)
Back to the subject of decimal and binary kilo/mega/giga/terabytes, you really need to understand the historical background. I grew up with memory chips and storage devices during the late 1970s and early 1980s. RAM chips were, and still are, accessed via data and address buses. These were almost always binary in nature. The address bus was either 4-bit or 8-bit or 16-bit and so on. The data bus was 1-bit or 4-bit or 8-bit, etc. Therefore the capacities of memory chips were nearly always expressed in powers of 2.
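The bus arithmetic above can be sketched in a few lines. The function name is my own invention, but the relationship it encodes is exactly the one described: an n-bit address bus selects 2**n locations, so capacity is forced into powers of 2 by the hardware itself.

```python
# Why memory chip capacities fall on powers of 2: an n-bit address
# bus can select 2**n distinct locations, and each location holds one
# data-bus-width word.

def chip_capacity_bits(address_bits: int, data_bits: int) -> int:
    """Total bits addressable with the given bus widths."""
    return (2 ** address_bits) * data_bits

# A hypothetical chip with a 16-bit address bus and 8-bit data bus:
print(chip_capacity_bits(16, 8) // 8)   # 65536 bytes = 64 KiB
```

Whatever widths you plug in, the result is always a power of 2 times the word size, which is why "64K" in a memory context has always meant 65536.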
OTOH, hard drives and floppy diskettes were assigned all sorts of capacities which were usually not binary in nature. For example, a 3.5" floppy diskette has two sides, 80 tracks and 18 sectors per track, for a total of 2880 sectors. The sector size is 512 bytes, which is a power of 2. The total capacity is 1474560 bytes, which is 1.40625 MiB. So where does the 1.44MB figure come from? It's a hybrid unit: 1474560 bytes is exactly 1440 KiB, and the marketers divided that by 1000 to get 1.44 "MB" — a "megabyte" of 1024000 bytes that is neither decimal nor binary.
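The floppy arithmetic is easy to verify. The last line shows that 1.44 only falls out if you use a mixed "megabyte" of 1000 × 1024 bytes:

```python
# 3.5" high-density floppy geometry.
sides, tracks, sectors_per_track, sector_bytes = 2, 80, 18, 512

total_sectors = sides * tracks * sectors_per_track   # 2880
total_bytes = total_sectors * sector_bytes

print(total_bytes)                  # 1474560
print(total_bytes / 2**20)          # 1.40625 (binary MiB)
print(total_bytes / 10**6)          # 1.47456 (decimal MB)
print(total_bytes / (1000 * 1024))  # 1.44 (hybrid "MB" of 1024000 bytes)
```

Neither the purely binary nor the purely decimal figure matches the label on the box; only the hybrid unit does.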
Typical 10MB and 20MB MFM HDDs of the 1980s had 17 sectors per track. Here is a 42MB Miniscribe HDD:
The CHS geometry was 809 cylinders, 6 heads, 17 sectors/track, 512 bytes/sector. The capacity was 42249216 bytes, which is 42.25MB (decimal) or 40.29MiB (binary).
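The same capacity calculation works for any CHS geometry. Here it is for the Miniscribe drive's figures:

```python
# Capacity of the Miniscribe drive from its CHS geometry.
cylinders, heads, sectors_per_track, sector_bytes = 809, 6, 17, 512

capacity = cylinders * heads * sectors_per_track * sector_bytes
print(capacity)              # 42249216 bytes
print(capacity / 10**6)      # 42.249216 (decimal MB)
print(capacity / 2**20)      # 40.2919921875 (binary MiB)
```

Note that 42249216 isn't a power of 2, and no rounding convention turns it into one; the geometry simply doesn't produce binary totals.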
Now tell me, how should the manufacturer specify the capacity? In short, memory capacity has historically been binary while storage capacity has been decimal.
Moreover, today we have SSDs, which add a new twist to the capacity issue. Because they are memory devices, their NAND arrays have a binary capacity, but part of that capacity is reserved for firmware and overprovisioning. For example, a 1TB SSD will typically have 1TiB of raw NAND, with roughly 9% of it set aside for internal use. Building an SSD with a full 1TiB of usable capacity is therefore a practical impossibility. No amount of stupid lawyers or stupid judges or stupid consumers can change that.
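The ~9% figure can be derived directly, assuming (as a simplification; real drives vary) that the raw NAND array is exactly 1 TiB and the advertised capacity is exactly 1 TB:

```python
# Reserved share of a "1TB" SSD, assuming exactly 1 TiB of raw NAND.
raw_bytes = 2**40       # 1 TiB of NAND
usable_bytes = 10**12   # 1 TB as advertised

reserved = raw_bytes - usable_bytes
print(reserved)                        # 99511627776 bytes held back
print(100 * reserved / raw_bytes)      # ~9.05% of the raw array
```

In other words, the gap between the binary prefix and the decimal prefix at the tera scale is itself about 9%, which is conveniently close to what the firmware and overprovisioning need anyway.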