The author decidedly has expert syndrome -- they deny both the history and the rationale behind memory unit nomenclature. Memory measurements evolved from the binary organization of computing architectures. A proud French pedant might like the decimal normalization of memory units discussed here, and it aligns more closely with the metric system and may have benefits for laypeople, but it fails to account for how memory is partitioned in historic and modern computing.
They are managed by different standards organizations. One doesn't like the other encroaching on its turf. "kilo" has only one official meaning as a base-10 scalar.
I don't think of base 10 as being meaningful in binary computers. Indexing 1k needs 10 bits regardless of whether you wanted 1000 or 1024, and base 10 leaves some awkward holes.
In my mind base 10 only became relevant when disk drive manufacturers came up with disks with "weird" disk sizes (maybe they needed to reserve some space for internals, or it's just that the disk platters didn't like powers of two) and realised that a base 10 system gave them better looking marketing numbers. Who wants a 2.9TB drive when you can get a 3TB* drive for the same price?
Three binary terabytes, i.e. 3 * 2^40, is 3,298,534,883,328 bytes, or 298,534,883,328 more bytes than 3 decimal terabytes. That difference is 298.5 decimal gigabytes, or 278 binary gigabytes.
Indeed, early hard drives had slightly more than even the binary size --- the famous 10MB IBM disk, for example, had 10653696 bytes, which was 167936 bytes more than 10MB --- more than an entire 160KB floppy's worth of data.
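For anyone who wants to sanity-check those figures, here's the arithmetic as a quick Python scratchpad (the numbers are just the ones quoted above):

    # 3 binary TB (TiB) vs 3 decimal TB
    three_tib = 3 * 2**40           # 3,298,534,883,328 bytes
    three_tb = 3 * 10**12           # 3,000,000,000,000 bytes
    diff = three_tib - three_tb     # 298,534,883,328 bytes
    print(diff / 10**9)             # ~298.5 decimal GB
    print(diff / 2**30)             # ~278.0 binary GB (GiB)

    # The famous 10MB IBM disk mentioned above
    print(10_653_696 - 10 * 2**20)  # 167,936 bytes -- more than a 160KB (163,840-byte) floppy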
Buy an SSD, and you can get both at the same time!
That is to say, all the (high-end/“gamer”) consumer SSDs that I’ve checked use 10% overprovisioning and achieve that by exposing a given number of binary TB of physical flash (e.g. a “2TB” SSD will have 2×1024⁴ bytes’ worth of flash chips) as the same number of decimal TB of logical addresses (e.g. that same SSD will appear to the OS as 2×1000⁴ bytes of storage space). And this makes sense: you want a round number on your sticker to make the marketing people happy, you aren’t going to make non-binary-sized chips, and 10% overprovisioning is OK-ish (in reality, probably too low, but consumers don’t shop based on the endurance metrics even if they should).
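Back-of-the-envelope, assuming exactly the binary-physical/decimal-logical split described above:

    physical = 2 * 1024**4   # "2TB" SSD: 2 TiB worth of actual flash chips
    logical = 2 * 1000**4    # ...exposed to the OS as 2 decimal TB
    print((physical - logical) / logical)   # ~0.0995, i.e. roughly 10% overprovisioning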
TLC flash actually has a total number of bits that's a multiple of 3, but it and QLC are so unreliable that there's a significant amount of extra bits used for error correction and such.
SSDs haven't been real binary sizes since the early days of SLC flash which didn't need more than basic ECC. (I have an old 16MB USB drive, which actually has a user-accessible capacity of 16,777,216 bytes. The NAND flash itself actually stores 17,301,504 bytes.)
> I don't think of base 10 being meaningful in binary computers.
They communicate via the network, right? And telephony has always been measured in decimal multiples of bits, as opposed to power-of-two multiples of eight-bit bytes, IIUC. So these two schemes have always been in tension.
So at some point the Ki, Mi, etc. prefixes were introduced, along with b vs B suffixes, and that solved the issue decades ago -- so why is this on the HN front page?!
A better question might be, why do we privilege the 8 bit byte? Shouldn't KiB officially have a subscript 8 on the end?
What are you talking about? The article literally fully explains the rationale, as well as the history. It's not "denying" anything. Seems entirely reasonable and balanced to me.
Unit names are always lower-case[1] (watt, joule, newton, pascal, hertz), except at the start of a sentence. When referring to the scientists the names are capitalized of course, and the unit symbols are also capitalized (W, J, N, Pa, Hz).
Thus there's no ambiguity: kB is a power of 10, and KB is clearly not kelvin-bytes, therefore it's a power of two. It doesn't quite fit the SI worldview, but I don't see that as a problem.
They are definitely denying the importance of 2-fold partitioning in computing architectures. VM_PAGE_SIZE is not defined with the value of '10000' for good reason (in many operating systems it is set to '16384').
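That's the practical payoff of power-of-two page sizes: rounding and alignment collapse to bit masks. A minimal sketch (16384 is just the example page size from the comment above, not any particular OS's constant):

    PAGE_SIZE = 16 * 1024        # must be a power of two for the mask trick to work
    PAGE_MASK = PAGE_SIZE - 1

    def page_align_up(nbytes: int) -> int:
        # Round an allocation size up to the next page boundary
        # with an add and a mask -- no division or modulo needed.
        return (nbytes + PAGE_MASK) & ~PAGE_MASK

    print(page_align_up(1))      # 16384
    print(page_align_up(16384))  # 16384
    print(page_align_up(16385))  # 32768

Try the same thing with a 10000-"byte" page and you're stuck doing actual division.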
That's why I said "usually acceptable depending on the context". In spoken language I also don't like the awkward and unusual pronunciation of "kibi". But I'll still prefer to write in KiB, especially if I document something.
Also, if you open major Linux distros' task managers, you'll be surprised to see that they often show sizes in decimal units when the "i" is missing from the prefix. Many utilities avoid the confusing prefixes "KB", "MB"... and use "KiB", "MiB"... instead.
There is a counterproductive obsession with powers of 10.
Sometimes, other systems just make more sense.
For example, for time, or angles, or bytes. There are properties of certain numbers (or bases) that make everything descending from them easier to deal with.
The ancient Sumerians used multiples of 60, as we continue to do for time and angles (which are related) today. It makes perfect sense. 60 is divisible by 2, 3, 4, 5, and 6, which makes it easy to use in calculations. Even the metric people are not so crazy as to propose replacing these with powers of 10.
Same with pounds, for example. A pound is 16 ounces, which can be divided 4 times without involving any fractions. Try that with metric.
Then there's temperature. Fahrenheit just works more naturally over the human-scale temperature range without involving fractions. Celsius kind of sucks by comparison.
Now that I think about it, I see KiB and kb all the time but I don't know that I've ever encountered Kib or kB in the wild. Maybe I'm in a bubble? Or maybe we should accept that kb is power of 10 but kB is power of two?
Well, I guess we already basically have this in practice, since Ki can be shortened to K: the metric kilo prefix is always a lower-case k, and we clearly aren't talking about kelvin-bytes.
And a megabyte is depending on the context precisely 1000x1000=1,000,000 or 1024x1024=1,048,576 bytes*, except when you're talking about the classic 3.5 inch floppy disks, where "1.44 MB" stands for 1440x1024 bytes, or about 1.47 true MB or 1.41 MiB.
* Yeah, I read the article. Regardless of the IEC's noble attempt, in all my years of working with people and computers I've never heard anyone actually pronounce MiB (or write it out in full) as "mebibyte".
Well, the 1.44 MB was called that because it was 1440 KB, twice the capacity of the 720K floppy and 4x the 360K floppy. It made perfect sense to me at the time.
It may "make sense" but that's actually a false equivalence. The raw disk space for a 3.5" high-density floppy disk for IBM PCs is 512 bytes per sector * 18 sectors per track * 80 tracks per side * 2 sides = 1,474,560 bytes. It is 1.47 MB or 1.40 MiB neither of which is 1440 KB or KiB. The 1440 number comes from Microsoft's FAT12 filesystem. That was the space that's left for files outside the allocation table.
Sectors per track or tracks per side are subject to change. Moreover, a different filesystem may have non-linear growth of its MFT/superblock and thus different overhead.
All words are made up. They weren’t handed down from a deity, they were made up by humans to communicate ideas to other humans.
“Kilo” can mean what we want in different contexts and it’s really no more or less correct as long as both parties understand and are consistent in their usage to each other.
I find it concerning that kilo can mean both 10^3 and 2^10 depending on context.
And the fact that the deciding context is not whether you're talking about computery stuff, but which program you happen to be using, has almost certainly led to avoidable bugs.
That latter part is only true since marketing people decided they knew better about computer related things than computer people.
It's also stupid because it's rare that anyone outside of programming even needs to care exactly how many bytes something is. At the scales at which kilobyte, megabyte, gigabyte, terabyte etc. are used, the smaller values are pretty much insignificant details.
If you ask for a kilogram of rice, you probably care more that this 1kg of rice is the same as the last 1kg of rice you got; you probably wouldn't even care how many grams that is. Similarly, if you order 1 ton of rice, do you care exactly how many grams it is, or do you just care that this 1 ton is the same as that 1 ton?
This whole stupidity started because hard disk manufacturers wanted to make their drives sound bigger than they actually were. At the time, everybody buying hard disks knew about this deception and just put up with it. We'd buy their 2GB drive and think to ourselves, "OK so we have 1.86 real GB". And that was the end of it.
Can you just imagine if manufacturers started advertising computers as having 34.3GB of RAM? Everybody would know it was nonsense and call it 32GB anyway.
Not as far as I can tell. There are power-of-10 bits and power-of-2 bytes. I've never seen the inverse of those in an actual real-world scenario outside of storage manufacturers gaming the numbers, and even then the context is once again perfectly clear.
The "which program you use" confusion was instigated by the idiots insisting that we should have metric kilobytes, megabytes and gigabytes (cheered on by crooked storage manufacturers).
Before all that nonsense, it was crystal clear: a megabyte in storage was unambiguously 1024 x 1024 bytes --- with the exception of crooked mass storage manufacturers.
There was some confusion, to be sure, but the partial success of the attempt to redefine the prefixes to their power-of-ten meanings has caused more confusion.
> We agree to meaning to communicate and progress without endless debate and confusion.
We decidedly do not do that. There's a whole term for new terms that arbitrarily get injected or redefined by new people: "slang". I don't understand a lot of the terms teenagers say now, because there's lots of slang that I don't know because I don't use TikTok and I'm thirty-something without kids so I don't hang out with teenagers.
I'm sure it was the same when I was a teenager, and I suspect this has been going on since antiquity.
New terms are made up all the time, but there are plenty of times when existing words get redefined. An easy one: I say "cool" all the time, but generally I'm not talking about temperature when I say it. If I said "cool" to refer to something that I like in 1920s America, they would say that's not the correct use of the word.
SI units are useful, but ultimately colloquialisms exist and will always exist. If I say kilobyte and mean 1024 bytes, and if the person on the other end knows that I mean 1024 bytes, that's fine and I don't think it's "nihilistic".
> Yes, and the made up words of kilo and kibi were given specific definitions by the people who made them up
Good for them. People make up their own definitions for words all the time. Some of those people even try to get others to adopt their definition. Very few are ever successful. Because language is about communicating shared meaning. And there is a great deal of cultural inertia behind the kilo = 2^10 definition in computer science and adjacent fields.
> Yes, and the made up words of kilo and kibi were given specific definitions by the people who made them up
Kilo was generally understood to mean one thousand long before it was adopted by a standards committee. I know the French love to try and prescribe the use of language, but in most of the world words just mean what people generally understand them to mean; and that meaning can change.
I don't think that the xkcd is relevant here, because I'm arguing that both parties know what the other is talking about. I haven't implicitly changed the definition, because most people assume that a kilobyte is 1024 bytes. Yeah, sure, it's "wrong" in some sense, but language is about communicating ideas between two people; if the communication is successful then the word is "correct".
If Bob says "kilobyte" to Alice, and Bob means 5432 bytes, and Alice perceives him to mean 5432 bytes, then in that context, "kilobyte" means 5432 bytes.
I worked with network-attached storage systems at PiB scale several years ago, and we referred to things in GiB/TiB because the difference was significant at the size of those systems and we needed to be precise.
That being said, I think the difference between MiB and MB is niche for most people
They should be more precise if they are talking about KiB in a context where the difference matters... luckily those contexts are usually written down.
A mile is exactly 1000 paces, or 4000 feet. You may disagree, but consider: the word mile come from latin for "one thousand". Therefore a mile must be 1000 of something, namely paces. I hope you find this argument convincing.
Maybe after society has collapsed and been rebuilt we'll end up with km and cm having a weird ratio to the meter. Same for kg. At least Celsius is just about impossible to screw up.
Not to be confused with the 1600 meter or "1 mile" race which is commonly run in US track and field events (i.e. 4 times around a 400 meter track). At least that's within 1% of an actual mile.
I think your comment is supposed to be sarcastic, but I'm not sure what the sarcasm is about? Yes, a mile is 1000 paces. That is why it's called a mile. It's not an "argument", it's just what a mile is.
I'm sticking with power-of-2 sizes. Invent a new word for decimal, metric units where appropriate. I proposed[0] "kitribytes", "metribytes", "gitribytes", etc. Just because "kilo" has a meaning in one context doesn't mean we're stuck with it in others. It's not as though the ancient Greeks originally meant "kilo" to mean "exactly 1,000". "Giga" just meant "giant". "Tera" is just "monster". SI doesn't have sole ownership for words meaning "much bigger than we can possibly count at a glance".
Donald Knuth himself said[1]:
> The members of those committees deserve credit for raising an important issue, but when I heard their proposal it seemed dead on arrival --- who would voluntarily want to use MiB for a maybe-byte?! So I came up with the suggestion above, and mentioned it on page 94 of my Introduction to MMIX. Now to my astonishment, I learn that the committee proposals have actually become an international standard. Still, I am extremely reluctant to adopt such funny-sounding terms; Jeffrey Harrow says "we're going to have to learn to love (and pronounce)" the new coinages, but he seems to assume that standards are automatically adopted just because they are there.
If Gordon Bell and Gene Amdahl used binary sizes -- and they did -- and Knuth thinks the new terms from the pre-existing units sound funny -- and they do -- then I feel like I'm in good company on this one.
Knuth is not in favour of using kilo/mega/etc with power-of-2 meanings:
> I'm a big fan of binary numbers, but I have to admit that this convention flouts the widely accepted international standards for scientific prefixes.
He also calls it “an important issue” and had written “1000 MB = 1 gigabyte (GB), 1000 GB = 1 terabyte (TB), 1000 TB = 1 petabyte (PB), 1000 PB = 1 exabyte (EB), 1000 EB = 1 zettabyte (ZB), 1000 ZB = 1 yottabyte (YB)” in his MMIX book even before the new binary prefixes became an international standard.
He is merely complaining that the new names for the binary prefixes sound funny (and has his own proposal like “large megabyte” and notation MMB etc), but he's still using the kilo/mega/etc prefixes with decimal meanings.
It's odd though. The byte isn't an SI unit in the first place, so GB isn't exactly valid metric anyway. Further, outside of storage manufacturers attempting to inflate their numbers, when does it ever make sense to mix powers of ten with 8-bit bytes? Networking is always in bits per second, not bytes.
> Invent a new word for decimal, metric units where appropriate.
No, they already did the opposite with KiB, MiB.
Because most metric decimal units are used for non-computing things. Kilometers, etc. Are you seriously proposing that kilometers should be renamed kitrimeters because you think computing prefixes should take priority over every other domain of science and life?
Do you often convert between inherently binary units like RAM sizes and more appropriately decimal units like distances?
It would be annoying if one frequently found themselves calculating gigabytes per hectare. I don't think I've ever done that. The closest I've seen is measuring magnetic tape density, where you get weird units like "characters per inch", where neither "character" nor "inch" is the common unit for its respective metric.
It means that it is fine for kilo to mean 1024 in the context of computers and 1000 in the context of distances, because you're never going to be in a situation where that is ambiguous.
Except it's not because it's constantly ambiguous in computing.
E.g. Macs measure file sizes in powers of 10 and call them KB, MB, GB. Windows measures file sizes in powers of 2 and calls them KB, MB, GB instead of KiB, MiB, GiB. Advertised hard drives come in powers of 10. Advertised memory chips come in powers of 2.
When you've got a large amount of data or are allocating an amount of space, are you measuring its size in memory or on disk? On a Mac or on Windows?
And that is because some people didn't like that a kilobyte was 1024 bytes instead of 1000, so they started using 1000 instead, and then that created confusion, so then they made up new term "kibibyte" that used 1024, and now it's all a mess.
And in most cases, using 1024 is more convenient because the sizes of page sizes, disk sectors, etc. are powers of 2.
> Macs measure file sizes in powers of 10 and call them KB, MB, GB.
That "KB" doesn't conform to SI; it should be written as kB (MB and GB are fine as they are). Ambiguity will only arise when speaking.
> Advertised hard drives come in powers of 10.
Mass storage (kB) has its own context at this point, distinct from networking (kb/s) and general computing (KB).
> When you've got a large amount of data or are allocating an amount of space, ...
You aren't speaking but are rather working in writing. kb, kB, Kb, and KB refer to four different bit counts, and there is absolutely zero ambiguity. The only question that might arise (depending on who you ask) is how to properly verbalize them.
I had a computer architecture prof (a reasonably accomplished one, too) who thought that all CS units should be binary, e.g. Gigabit Ethernet should be 931Mbit/s, not 1000MBit/s.
I disagreed strongly - I think X-per-second should be decimal, to correspond to Hertz. But for quantity, binary seems better. (modern CS papers tend to use MiB, GiB etc. as abbreviations for the binary units)
Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box, and advertise it as X GB (decimal) of usable storage. (probably a bit less, as a few blocks of the X binary GB of flash would probably be DOA) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
NAND flash has overprovisioning even on a per-die basis, eg. Micron's 256Gbit first-generation 3D NAND had 548 blocks per plane instead of 512, and the pages were 16384+2208 bytes. That left space both for defects and ECC while still being able to provide at least the nominal capacity (in power of two units) with good yield, but meant the true number of memory cells was more than 20% higher than implied by the nominal capacity.
The decimal-vs-binary discrepancy is used more as slack space to cope with the inconvenience of having to erase whole 16MB blocks at a time while allowing the host to send write commands as small as 512 bytes. Given the limited number of program/erase cycles that any flash memory cell can withstand, and the enormous performance penalty that would result from doing 16MB read-modify-write cycles for any smaller host writes, you need way more spare area than just a small multiple of the erase block size. A small portion of the spare area is also necessary to store the logical to physical address mappings, typically on the order of 1GB per 1TB when tracking allocations at 4kB granularity.
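Plugging in the per-die numbers quoted above (a rough check that ignores blocks lost to factory defects):

    # 548 blocks per plane where 512 would cover the nominal capacity,
    # and 16384+2208-byte pages where 16384 bytes are the nominal part.
    raw_vs_nominal = (548 / 512) * ((16384 + 2208) / 16384)
    print(raw_vs_nominal)   # ~1.215 -> over 20% more physical cells than the nominal capacity implies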
There's a good reason that gigabit ethernet is 1000MBit/s and that's because it was defined in decimal from the start. We had 1MBit/s, then 10MBit/s, then 100MBit/s then 1000MBit/s and now 10Gbit/s.
Interestingly, from 10GBit/s, we now also have binary divisions, so 5GBit/s and 2.5GBit/s.
Even at slower speeds, these were traditionally always decimal based - we call it 50bps, 100bps, 150bps, 300bps, 1200bps, 2400bps, 9600bps, 19200bps and then we had the odd one out - 56k (actually 57600bps) where the k means 1024 (approximately), and the first and last common speed to use base 2 kilo. Once you get into MBps it's back to decimal.
To add further confusion, 57600 was actually a serial port speed, from the computer to the modem, which was higher than the maximum physical line (modem) speed. Many people ran higher serial port speeds to take advantage of compression (115200 was common.)
56000 BPS was the bitrate you could get out of a DS0 channel, which is the digital version of a normal phone line. A DS0 is actually 64000 BPS, but 1 bit out of 8 is "robbed" for overhead/signalling. An analog phone line got sampled to 56000 BPS, but lines were very noisy, which was fine for voice but not for data.
7 bits per sample * 8000 samples per second = 56000, not 57600.
That was theoretical maximum bandwidth! The FCC also capped modems at 53K or something, so you couldn't even get 56000, not even on a good day.
This has nothing to do with 1024; it has to do with 1200 and its multiples, and with the 14k and 28k modems, where everyone just dropped the last few hundred bits per second because you never reached that speed anyway.
> that's because it was defined in decimal from the start
I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000 byte pages.
Now ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is octets! Starting with the preamble. But the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined for CSMA/CD to work, but the max could have been anything.
RAM had binary sizing for perfectly practical reasons. Nothing else did (until SSDs inherited RAM's architecture).
We apply it to all the wrong things mostly because the first home computers had nothing but RAM, so binary sizing was the only explanation that was ever needed. And 50 years later we're sticking to that story.
RAM having binary sizing is a perfectly good reason for hard drives having binary sized sectors (more efficient swap, memory maps, etc), which in turn justifies all of hard disks being sized in binary.
Literally every number in a computer is base-2, not just RAM addressing. Everything is ultimately bits, pins, and wires. The physical and logical interface between your oddly sized disk and your computer? Also some base-2.
Not everything is made from wires and transistors. And that's why these things are usually not measured in powers of 2:
- magnetic media
- optical media
- radio waves
- time
There's good reasons for having power-of-2 sectors (they need to get loaded into RAM), but there's really no compelling reason to have a power-of-2 number of sectors. If you can fit 397 sectors, only putting in 256 is wasteful.
Since everything ultimately ends up inside a base-2 computer, across a base-2 bus, it still makes sense to measure these media that way even if they aren't subject to the same considerations.
The choice would be effectively arbitrary, the number of actual bits or bytes is the same regardless of the multiplier that you use. But since it's for a computer, it makes sense to use units that are comparable (e.g. RAM and HD).
Buses and networking fit best with base 10 bits (not bytes) per second for reasons that are hopefully obvious. But I agree with you that everything else naturally lends itself to base 2.
Nope. The first home computers like the C64 had RAM and sectors on disk, which in the case of the C64 meant 256 bytes. And there it is again: a power of two, just a smaller one than 1024.
Just later, some marketing assholes thought they could better sell their hard drives when they lie about the size and weasel out of legal issues with redefining the units.
It makes it inconvenient to do things like estimate how long it will take to transfer a 10GiB file, both because of the difference between G and Gi and because one is in bytes and the other is in bits.
There are probably cases where corresponding to Hz is useful, but for most users I think 119MiB/s is more useful than 1Gbit/s.
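To make the annoyance concrete -- this is pure unit conversion, nothing vendor-specific:

    file_bytes = 10 * 2**30        # a "10 GiB" file
    link_bits_per_s = 10**9        # a 1 Gbit/s link (decimal, as links are specified)

    print(file_bytes * 8 / link_bits_per_s)   # ~85.9 s, not the 80 s a naive 10*8/1 estimate gives
    print(link_bits_per_s / 8 / 2**20)        # ~119.2 -> 1 Gbit/s is about 119 MiB/s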
Wirespeeds and bitrate and baud and all that stuff is vastly confusing when you start looking into it - because it's hard to even define what a "bit on the wire" is when everything has to be encoded in such a way that it can be decoded (specialized protocols can go FASTER than normal ones on the same wire and the same mechanism if they can guarantee certain things - like never having four zero bits in a row).
An even bigger problem is that networks are measured in bits while RAM and storage are in bytes. I'm sure this leads to plenty of confusion when people see a 120 meg download on their 1 gig network.
(The old excuse was that networks are serial but they haven't been serial for decades.)
Musicians use numbering systems that are actually far more confused than anything discussed here. How many notes in an OCTave? "Do re mi fa so la ti do" is eight, but that last do is part of the next octave, so an OCTave is 7 notes. (If we count transitions, same thing: starting at the first do as zero, re is 1, ... again 7.)
The same and even more confusion is engendered when talking about "fifths" etc.
The 7-note scale you suggest (do re mi fa so la ti do) consists of a pattern of intervals (2 2 1 2 2 2 1) in the 12-tone equal-tempered scale. There are infinite ways of exploring an octave in music, but unfortunately listener demand for such exploration is near infinitesimal.
You can blame the Romans for that, as they practiced inclusive counting. Their market days occurring once every 8 days were called nundinae, because the next market day was the ninth day from the previous one. (And by the same logic, Jesus rose from the dead on the third day.)
Whenever this discussion comes up I like to point out that even in the computer industry, prefixes like kilo/mega/etc. more often mean a power of 10 than a power of 2.
They almost always mean power of 10, unless you're discussing RAM, RAM addressing, or RAM pages. (or flash, which has inherited most of the same for most of the same reasons)
Nice page, and nice link to Colin Percival's page too! Let me toss you one example: CDs are marketed in mebibytes. A "650 MB" burnable CD is actually 650 MiB ≈ 682 MB, and likewise for "700 MB" being actually 700 MiB ≈ 734 MB. DVD and BD do use metric prefixes correctly, like you pointed out. Back in the day, I archived my data on CD/DVD/BD, and I planned out my disc burns to have only about 1 to 10 MB of wasted space, so I had to be very aware of the true definition and exactly how much capacity was available for me to use.
> The only few places where binary prefixes are used are to refer to RAM capacity and file sizes, whereas decimal prefixes apply to all other areas and all units (not "only bitrates"): storage capacity, clock frequency, stream bandwidth, baud, pixel numbers, data throughput, processing power
Storage capacity also uses binary prefixes. The distinction here isn't that file sizes are reported in binary numbers and storage capacity is reported in decimal numbers. It's that software uses binary numbers and hard drive manufacturers use decimal numbers. You don't see df reporting files in binary units and capacities in decimal units.
Of that large list of measurements, only bandwidth is measured in bytes, making the argument mostly an exercise in sophistry. You can't convince anyone that KB means 1000 bytes by arguing that kHz means 1000 Hz.
The author doesn’t actually answer their question, unless I missed something?
They go on to make a few more observations, and in the end say only that the current differing definitions are sometimes confusing to non-experts.
I don't see much of an argument here for changing anything. Some non-experts experience minor confusion about two things that are different. Did I miss something bigger in this?
"kilo" means what people take it to mean in any particular context. In computing, it is overwhelmingly power of two even today, and if you don't use it in this manner you have to clarify to be understood properly.
Sure. I assume the ship has sailed already and I certainly won't die on that hill to change the meaning again, but still the word "kilo" literally means 1000 and it would have been more consistent to use it like this and for 1024 use a (slightly) different word.
In this context it's a unit prefix, not a standalone word. SI specifies a widely adopted system that defines and then uses a set of prefixes in a consistent manner. But we aren't forced to use SI everywhere without reason.
> Explorer is just following existing practice. Everybody (to within experimental error) refers to 1024 bytes as a kilobyte, not a kibibyte. If Explorer were to switch to the term kibibyte, it would merely be showing users information in a form they cannot understand, and for what purpose? So you can feel superior because you know what that term means and other people don’t.
Honestly, when working with computers, KiB, MiB, GiB, etc. usually just make more sense. It is easier to reason about address space, and page sizes are often delineated in 4KiB chunks. It does come off as a little "inside baseball", but there are practical reasons for it.
If you really want to come at it from an information theory perspective, even the "byte" is rather arbitrary - the only thing that matters is the number of bits.
This ambiguity is documented at least back to 1984, by IBM, the pre-eminent computer company of the time.
In 1972 IBM started selling the IBM 3333 magnetic disk drive. This product catalog [0] from 1979 shows them marketing the corresponding disks as "100 million bytes" or "200 million bytes" (3336 mdl 1 and 3336 mdl 11, respectively). By 1984, those same disks were marketed in the "IBM Input/Output Device Summary"[1] (which was intended for a customer audience) as "100MB" and "200MB"
Edit: The below is wrong. Older experience has corrected me - there has always been ambiguity (perhaps bifurcated between CPU/OS and storage domains). "And that with such great confidence!", indeed.
-------
The article presents wishful thinking. The wish is for "kilobyte" to have one meaning. For the majority of its existence, it had only one meaning - 1024 bytes. Now it has an ambiguous meaning. People wish for an unambiguous term for 1000 bytes, however that word does not exist. People also might wish that others use kibibyte any time they reference 1024 bytes, but that is also wishful thinking.
The author's wishful thinking is falsely presented as fact.
I think kilobyte was the wrong word to ever use for 1024 bytes, and I'd love to go back in time to tell computer scientists that they needed to invent a new prefix to mean "1,024" / "2^10" of something, which kilo- never meant before kilobit / kilobyte were invented. Kibi- is fine, the phonetics sound slightly silly to native English speakers, but the 'bi' indicates binary and I think that's reasonable.
I'm just not going to fool myself with wishful thinking. If, in arrogance or self-righteousness, one simply assumes that every time they see "kilobyte" it means 1,000 bytes, then they will run into many, many failures. We will always have to take care to verify whether "kilobyte" means 1,000 or 1,024 bytes before implementing something which relies on that for correctness.
You've got it exactly the wrong way around. And that with such great confidence!
There was always a confusion about whether a kilobyte was 1000 or 1024 bytes. Early diskettes always used 1000, only when the 8 bit home computer era started was the 1024 convention firmly established.
Before that it made no sense to talk about kilo as 1024. Earlier computers measured space in records and words, and I guess you can see how in 1960, no one would use kilo to mean 1024 for a 13 bit computer with 40 byte records. A kiloword was, naturally, 1000 words, so why would a kilobyte be 1024?
1024 being near-ubiquitous was only the case in the 90s or so - except for drive manufacturing and signal processing. Binary prefixes didn't invent the confusion, they were a partial solution. As you point out, while it's possible to clearly indicate binary prefixes, we have no unambiguous notation for decimal bytes.
Even worse, the 3.5" HD floppy disk format used a confusing combination of the two. Its true capacity (when formatted as FAT12) is 1,474,560 bytes. Divide that by 1024 and you get 1440KB; divide the 1440 by 1000 and you get the oft-quoted (and often printed on the disk itself) "1.44MB", which is inaccurate no matter how you look at it.
I'm not seeing evidence for a 1970s 1000-byte kilobyte. Wikipedia's floppy disk page mentions the IBM Diskette 1 at 242944 bytes (a multiple of 256), and then 5¼-inch disks at 368640 bytes and 1228800 bytes, both multiples of 1024. These are all multiples of power-of-two sector sizes. Nobody had a 1000-byte sector, I'll assert.
Firstly, I think you may have replied to the wrong person. I wasn't the one who mentioned the early diskettes point, I was just quoting it.
But that said, we aren't talking about sector sizes. Of course storage mediums are always going to use sector sizes of powers of two. What's being talked about here is the confusion in how to refer to the storage medium's total capacity.
The wiki page agrees with parent, "The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB or 1.47 MB) disk drive, which would become the most popular, first shipped in 1986"
It's way older than the 1990s! In computing, "K" has meant 1024 at least since the 1970s.
Example: in 1972, DEC PDP 11/40 handbook [0] said on first page: "16-bit word (two 8-bit bytes), direct addressing of 32K 16-bit words or 64K 8-bit bytes (K = 1024)". Same with Intel - in 1977 [1], they proudly said "Static 1K RAMs" on the first page.
It was exactly this - and nobody cared until the disks (the only thing that used decimal K) started getting so big that it was noticeable. With a 64K system you're talking 1,536 "extra" bytes of memory - or 1,536 bytes of memory lost when transferring to disk.
But once hard drives started hitting about a gigabyte was when everyone started noticing and howling.
It was earlier than the 90s, and came with popular 8-bit CPUs in the 80s. The Z-80 microprocessor could address 64KB (which was 65,536 bytes) on its 16-bit address bus.
Similarly, the 4104 chip was a "4K x 1 bit" RAM chip and stored 4096 bits. You'd see this in the whole 41xx series, and beyond.
> The Z-80 microprocessor could address 64KB (which was 65,536 bytes) on its 16-bit address bus.
I was going to say that what it could address and what they called what it could address is an important distinction, but found this fun ad from 1976[1].
"16K Bytes of RAM Memory, expandable to 60K Bytes", "4K Bytes of ROM/RAM Monitor software", seems pretty unambiguous that you're correct.
Interestingly wikipedia at least implies the IBM System 360 popularized the base-2 prefixes[2], citing their 1964 documentation, but I can't find any use of it in there for the main core storage docs they cite[3]. Amusingly the only use of "kb" I can find in the pdf is for data rate off magnetic tape, which is explicitly defined as "kb = thousands of bytes per second", and the only reference to "kilo-" is for "kilobaud", which would have again been base-10. If we give them the benefit of the doubt on this, presumably it was from later System 360 publications where they would have had enough storage to need prefixes to describe it.
Still, the advertisement is filled with details like the number of chips, the number of pins, etc. If you're dealing with chips and pins, it's always going to be base-2.
> only when the 8 bit home computer era started was the 1024 convention firmly established.
That's the microcomputer era that has defined the vast majority of our relationship with computers.
IMO, having lived through this era, the only people pushing 1,000 byte kilobytes were storage manufacturers, because it allows them to bump their numbers up.
Good lord, arrogance and self-righteousness? You're blowing the article out of proportion. They don't say anything non-factual or unreasonable - why inject hostility where none is called for?
In fact, they practically say the same exact thing you have said: in a nutshell, base-10 prefixes were used for base-2 numbers, and now it's hard to undo that standard in practice. They didn't say anything about making assumptions. The only difference is that the author wants to keep trying, and you don't think it's possible? Which is perfectly fine. It's just not as dramatic as your tone implies.
I'm not calling the author arrogant or self-righteous. I stated that if a hypothetical person simply assumes that every "kilobyte" they come across is 1,000 bytes, that they are doomed to frequent failures. I implied that for someone to hypothetically adhere to that internal dogma even in the face of impending failures, the primary reasons would be either arrogance or self-righteousness.
I don't read any drama or hostility, just a discussion about names. OP says that kilobyte means one thing, the commenter says that it means two things and just saying it doesn't can't make that true. I agree, after all, we don't get to choose the names for things that we would like.
> The article presents wishful thinking. The wish is for "kilobyte" to have one meaning. For the majority of its existence, it had only one meaning - 1024 bytes. Now it has an ambiguous meaning. People wish for an unambiguous term for 1000 bytes, however that word does not exist. People also might wish that others use kibibyte any time they reference 1024 bytes, but that is also wishful thinking.
> The author's wishful thinking is falsely presented as fact.
There's good reason why the meanings of SI prefixes aren't set by convention or by common usage or by immemorial tradition, but by the SI. We had several thousand years of setting weights and measures by local and trade tradition and it was a nightmare, which is how we ended up with the SI. It's not a good show for computing to come along and immediately recreate the long and short ton.
> setting weights and measures by local and trade tradition and it was a nightmare
Adding to your point, it is human nature to create industry- or context-specific units and refuse to play with others.
In the non-metric world, I see examples like: Paper publishing uses points (1/72 inch), metal machinists use thousandths of an inch, woodworkers use feet and inches and binary fractions, land surveyors use decimal feet (unusual!), waist circumference is in inches, body height is in feet and inches, but you buy fabric by the yard, airplane altitudes are in hundreds to tens of thousands of feet instead of decimal miles. Crude oil is traded in barrels but gasoline is dispensed in gallons. Everyone thinks their usage of units and numbers is intuitive and optimal, and everyone refuses to change.
In the metric(ish) world, I still see many tensions. The micron is a common alternate name for the micrometre, yet why don't we have a millin or nanon or picon? The solution is to eliminate the micron. I've seen the angstrom (0.1 nm) in spectroscopy and in the discussion of CPU transistor sizes, yet it diverts attention away from the picometre. The bar (100 kPa) is popular in talking about things like tire pressure because it's nearly 1 atmosphere. The mmHg is a unit of pressure that sounds metric but is not; the correct unit is pascal. No one in astronomy uses mega/giga/tera/peta/etc.-metres; instead they use AU and parsec and (thousand, million, billion) light-years. Particle physics use eV/keV/MeV instead of some units around the picojoule.
Having a grab bag of units and domains that don't talk to each other is indeed the natural state of things. To put your foot down and say no, your industry does not get its own special snowflake unit, stop that nonsense and use the standardized unit - that takes real effort to achieve.
The SI should just have set kilobyte to 1024 in acquiescence to the established standard, instead of being defensive about keeping a strict meaning of the prefix.
It goes back way further than that. The first IBM hard drive was the IBM 350 for the IBM 305 RAMAC. It was 5 million characters. Not bytes, bytes weren't "a thing" yet. 5,000,000 characters. The very first hard drive was base-10.
Here's my theory. In the beginning, everything was base10. Because humans.
Binary addressing made sense for RAM. Especially since it makes decoding address lines into chip selects (or slabs of core, or whatever) a piece of cake, having chips be a round number in binary made life easier for everyone.
Then early DOS systems (CP/M comes to mind particularly) mapped disk sectors to RAM regions, so to enable this shortcut, disk sectors became RAM-shaped. The 512-byte sector was born. File sizes can be written in bytes, but what actually matters is how many sectors they take up. So file sizing inherited this shortcut.
But these shortcuts never affected "real computers", only the hamstrung crap people were running at home.
So today we have multiple ecosystems. Some born out of real computers, some with a heavy DOS inheritance. Some of us were taught DOS's limitations as truth, and some of us weren't.
However, it doesn't seem to be divided into sectors at all; each track is more like a loop of magnetic tape. In that context it makes a bit more sense to use decimal units, measuring in bits per second as for serial comms.
Or maybe there were some extra characters used for ECC? 5 million / 100 / 100 = 500 characters per track, leaves 72 bits over for that purpose if the actual size was 512.
First floppy disks - also from IBM - had 128-byte sectors. IIRC, it was chosen because it was the smallest power of two that could store an 80-column line of text (made standard by IBM punched cards).
Disk controllers need to know how many bytes to read for each sector, and the easiest way to do this is by detecting overflow of an n-bit counter. Comparing with 80 or 100 would take more circuitry.
Almost all computers have used power-of-2 sized sectors. The alternative would involve wasted bits (e.g. you can't store as much information in 256 1000-byte units as 256 1024-byte units, so you lose address space) or have to write multiplies and divides and modulos in filesystem code running on machines that don't have opcodes for any of those.
You can get away with those on machines with 64 bit address spaces and TFLOPs of math capacity. You can't on anything older or smaller.
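That's the trick in a nutshell: with power-of-two sectors, the (sector, offset) split is a shift and a mask, which even a CPU with no divide hardware can manage. A minimal sketch using a 512-byte sector purely as an example:

    SECTOR_BITS = 9                # 2**9 = 512-byte sectors
    SECTOR_SIZE = 1 << SECTOR_BITS

    def split(byte_address: int):
        # No divide/modulo needed: shift for the sector number,
        # mask for the offset within the sector.
        return byte_address >> SECTOR_BITS, byte_address & (SECTOR_SIZE - 1)

    print(split(1_474_560))   # (2880, 0): the HD floppy discussed above is exactly 2880 sectors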
And networking - we've almost always used standard SI prefixes for, e.g., bandwidth. 1 gigabit per second == 1 * 10^9.
Which makes it really @#ing annoying when you have things like "I want to transmit 8 gigabytes (meaning gibibytes, 2^30 bytes each) over a 1 gigabit/s link, how long will it take?". Welcome to every networking class in the 90s.
We should continue moving towards a world where 2^k prefixes have separate names and we use SI prefixes only for their precise base-10 meanings. The past is polluted but we hopefully have hundreds of years ahead of us to do things better.
Which doesn't make it more correct, of course, even though I strongly believe that it is (where appropriate, for things like memory sizes). Just saying, it goes much further back than 1984.
That is a prescriptivist way of thinking about language, which is useful if you enjoy feeling righteous about correctness, but not so helpful for understanding how communication actually works. In reality-reality, "kilobyte" may mean either "1000 bytes" or "1024 bytes", depending on who is saying it, whom they are saying it to, and what they are saying it about.
You are free to intend only one meaning in your own communication, but you may sometimes find yourself being misunderstood: that, too, is reality.
It's not even really prescriptivist thinking… "Kilobyte" to mean both 1,000 B & 1,024 B is well-established usage, particularly dependent on context (with the context mostly being HDD manufacturers who want to inflate their drive sizes, and … the abomination that is the 1.44 MB diskette…). But a word can be dependent on context, even in prescriptivist settings.
E.g., M-W lists both, with even the 1,024 B definition being listed first. Wiktionary lists the 1,024 B definition, though it is tagged as "informal".
As a prescriptivist myself I would love it if the world could standardize on kilo = 1000, kibi = 1024, but that'll likely take some time … and the introduction of the word to the wider public, who I do not think are generally aware of the binary prefixes, and some large companies deciding to use the term, which they likely won't do, since companies seem to prefer low-grade perpetual confusion over some short-term confusion during the switch.
Does anyone, other than HDD manufacturers who want to inflate their drive sizes, actually want a 1000-based kilobyte? What would such a unit be useful for? I suspect that a world which standardized on kibi = 1024 would be a world which abandoned the word "kilobyte" altogether.
> with the context mostly being HDD manufacturers who want to inflate their drive sizes
This is a myth. The first IBM hard drive was 5,000,000 characters in 1956 - before bytes were even in common usage. Drives have always been base 10; it's not a conspiracy.
Drives are base10, lines are base10, clocks are base10, pretty much everything but RAM is base10. Base2 is the exception, not the rule.
How can there be both a "usual meaning" and a "correct meaning" when you assert that there is only one meaning and "There's no possible discussion over this fact."
You can say that one meaning is more correct than the other, but that doesn't vanish the other meaning from existence.
When precision is required, you either use kibibytes or define your kilobytes explicitly. Otherwise there is a real risk that the other party does not share your understanding of what a kilobyte should mean in that context. Then the numbers you use have at most one significant figure.
That's funny. If I used the "correct" meaning when precision was required then I'd be wrong every time I need to use it. In computers, bytes are almost always measured in base-2 increments.
When dealing with microcontrollers and datasheets and talking to other designers, yes precision is required, and, e.g. 8KB means, unequivocally and unambiguously, 8192 bytes.
I kid good-naturedly. I'm always horrified at what autocorrect has done to my words after it's too late to edit or un-send them. I swear I write words goodly, for realtime!
The line between "literal" and "colloquial" becomes blurred when a word consisting of strongly-defined parts ("kilo") gets used in official, standardized contexts with a different meaning.
In fact, this is the only case I can think of where that has ever happened.
"colloquial" has no place in official contexts. I'll happily talk about kB and MB without considering the small difference between 1000 and 1024, but on a contract "kilo" will unequivocally mean 1000, unless explicitely defined as 1024 for the sake of that document.
<joke> How to tell a software engineer from a real one? A real engineer thinks that 1 kilobyte is 1000 bytes while software engineer believes that there are 1024 meters in a kilometer :-) </joke>
I'm surprised they didn't mention kibibyte. (Edit: they did) There are plenty of applications where power-of-2 alignment is useful or necessary. Not addressing that and just chastising everyone for using units wrong isn't particularly helpful. I guess we can all just switch to kibibytes, except the HDD manufacturers.
We can, but we won't. At least not any time soon. For the foreseeable future, kilobyte will remain an ambiguous term, and kibibyte will very often not be used when someone is referring to 1024 bytes.
It honestly sounds like how a diaper-wearing baby would mispronounce kilobyte.
"I will not sacrifice my dignity. We've made too many compromises already; too many retreats. They invade our space and we fall back. They assimilate entire worlds with awkward pronunciations. Not again. The line must be drawn here! This far, no further! And I will make them pay for what they've done to the kilobyte!"
The author doesn't go far enough into the problems with trying to convert information theory to SI Units.
SI units attempt to fix standard measurements to perceived constants in nature. A meter (distance) is defined by how far light travels in a vacuum within a certain number of oscillations of a cesium atom (time). This doesn't mean we tweak the meter to conform to observational results, even though we'd all be happier if light really were 300,000 km/s instead of ~299,792 km/s.
Then there's the problem of not mixing different measurement units. SI was designed to conform all measurements to the same base-10 exponents (cm, m, km versus feet, inches and yards). But the author's attempt to resolve this matter doesn't even conform to standardised SI units as we would expect it to.
What is a byte? Well, 8 bits, sometimes.
What is a kilobit? 1000 Bits
What is a kilobyte? 1000 Bytes, or 1024 Bytes.
Now we've already mixed units based on what a bit or a byte even is, with an extra multiplier of 8 on top of the exponent of 1000 or 1024.
And if you think, hey, at least the bit is the least divisible unit of information, that's not even correct. If there should be a reformalisation of information units, you would argue that the count of zeros is the least divisible unit of information. A kilo of zeros would be 1000. A "byte" would be defined as containing up to 256 zeros. A megazero would contain up to a million zeros.
It wouldn't make any intuitive sense for anyone to count zeros, which would automatically convert your information back to base 10, but it does show that the most sensible unit of information is what we already had before -- that is, not mixing bytes (powers of 2) with SI-defined multiples of 1000.
I can go along with that, mostly. When you say "octet", some old-timer with an IBM 650 can't go whining that kids these days can't even read his 7-bit emails.
“A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time.”
"byte" doesn't even remotely resemble any decimal prefix, so it's okay. The problem is that prefixes "kilo", "mega", etc. are supposed to be decimal prefixes, but are used as binary. And what's worse, they aren't used consistently, sometimes they really mean decimal magnitudes, sometimes they don't.
Ah, if only I had a dollar for every time I've had to point someone to a tool like the following when trying to explain the difference between how much "bandwidth" their server has per month (an IEC unit) vs how fast the server connection is (an SI unit): https://null.53bits.co.uk/uploads/programming/javascript/dat...
I like how the GNU coreutils seem to have done. They use real, 1024-byte kilobytes by default, but print only the abbreviation of the prefix so it's just 10K or 200M and people can pretend it stands for some other silly word if they want.
You can use `--si` for fake, 1000-byte kilobytes. Trying it, it seems weird that these are reported with a lowercase 'k' but 'M' and so on remain uppercase.
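A toy formatter mimicking the two behaviours described above (just an illustration of the output convention, not how coreutils actually implements it):

    def human(nbytes: int, si: bool = False) -> str:
        # si=False: 1024-based, upper-case K (like a plain -h listing)
        # si=True:  1000-based, lower-case k (like --si)
        base = 1000 if si else 1024
        prefixes = ["k" if si else "K", "M", "G", "T"]
        value = float(nbytes)
        if value < base:
            return str(nbytes)
        for prefix in prefixes:
            value /= base
            if value < base or prefix == prefixes[-1]:
                return f"{value:.1f}{prefix}"

    print(human(10 * 2**10))           # "10.0K"
    print(human(10 * 2**10, si=True))  # "10.2k"
    print(human(1474560))              # "1.4M"
    print(human(1474560, si=True))     # "1.5M"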
. . . it seems weird that these are reported with a lowercase 'k' but 'M' and so on remain uppercase.
For SI units, the abbreviations are defined, so a lowercase k for kilo and uppercase M for mega is correct. Lower case m is milli, c is centi, d is deci. Uppercase G is giga, T is tera and so on.
Not true. Several SI prefixes already overlap with units. m is both metre and milli-. T is tesla and tera-. c is a prefix of candela (cd) but also centi-. (G is gauss (cgs unit, not mks/SI) and giga-.)
kibi can be apocryphally reinterpreted as being from kibishii, Japanese for strict. E.g. kibishii sensei: strict teacher.
People who say things like kibibyte usually have no sense of humor, and no tolerance for inconsistencies.
ketchi means stingy in Japanese (careful with that word because it is informal and negative). I propose we rename kibibyte to ketchibaito. ketchibaito could also take on a double meaning as denoting badly paid part-time work.
(One word having two meanings: don't that just make the kibibyte people's puny heads explode ...)
I automatically assume that people who use KB=1000B want to sell me something (and provide less than promised), so they should be aggressively ignored or removed from my vicinity
KB is 1024 bytes, and don't you dare try stealing those 24 bytes from me
It is good that the old ways have not been forgotten. We used to argue about tabs vs. spaces, GPL vs. BSD, Linux vs. BSD, FreeBSD vs. NetBSD, BSD 2 clause vs BSD 3 clause. It's important to complain about things pointlessly. Builds character.
Anyway, here's my contribution to help make everything worse. I think we should use Kylobyte, etc. when we don't care whether it's 1000 or 1024. KyB. See! Works great.
The meanings of kilo, mega, giga, tera, etc. are unambiguous: SI prefixes defined as powers of 10, not 2. 1 TB is 10^12 bytes, not 2^40 bytes.
The misuse of those prefixes as powers of 1024, while useful as shorthand for computer memory where binary addressing makes powers of two natural, is still exactly that: a misuse of SI prefixes.
There's now a separate set of base-2 prefixes to solve this, and people need to update their language accordingly.
Just because an official body gives a single definition doesn't mean it's unambiguous. Real communication isn't bound by official bodies. When I say my computer has 16GB of RAM, that does not mean exactly 16 billion bytes.
I need to update my language accordingly? No thanks. I'll keep saying what I say and nothing will happen.
For all the people commenting as if the meaning of "kilo" was open to discussion... you are all from the United States of America, and you call your country "America", right?
It's too late. Powers-of-two won. I'm the sort of person who uses "whom" in English, but even I acknowledge that using "KB" to mean 1,000, not 1,024, can only breed confusion. The purpose of language is to communicate. I'm all for pedantry when it's compatible with clarity, but we can't reconcile the two goals here.
No it didn't, look at your flash/hard drive labels. Also, there has been confusion since the beginning, and the core cause of confusion is refusing to use the common meaning of K, so insisting on that is just perpetuating said confusion
Those silly words only come up in discussions like this. I have never heard them uttered in real life. I don't think my experience is bizarre here - actual usage is what matters in my book.
To be honest, I think the power-ten SI people might have won the war against the power-two people if they'd just chosen a prefix that sounded slightly less ridiculous than "kibibyte".
What the hell is a "kibibyte"? Sounds like a brand of dog food.
I genuinely believe you're right. It comes across like "the people who are right can use the disputed word, and the people who are wrong can use this infantile one".
I don't know what the better alternative would have been, but this certainly wasn't it.
What I would have done instead: 1. defined traditional suffixes and abbreviations to mean powers of two, not ten, aligning with most existing usages, but...
2. deprecated their use, especially in formal settings...
3. defined new spelled-out vocabulary for both pow10 and pow2 units, e.g. in English "two megabytes" becomes "two binary megabytes" or "two decimal megabytes", and...
4. defined new unambiguous abbreviations for both decimal and binary units, e.g. "5MB" (traditional) becomes "5bMB" (simplified, binary) or "5dMB" (simplified, decimal)
This way, most people most of the time could keep using the traditional units and be understood just fine, but in formal contexts in which precision is paramount, you'd have a standard way of spelling out exactly what you meant.
I'd have gone one step further too and stipulate that truth in advertising would require storage makers to use "5dMB" or "5 decimal megabytes" or whatever in advertising and specifications if that's what they meant. No cheating using traditional units.
(We could also split bits versus bytes using similar principles, e.g. "bi" vs "by".)
I mean, consider the UK, which still uses pounds, stone, and miles. In contexts where you'd use those units, writing "10KB" or "one megabyte" would be fine too.
The entire reason "storage vendors prefer" 1000-based kilobytes is so that they could misrepresent and over-market their storage capacities, pocketing that 24-bytes-per-KB gap between expectation and reality as profit.
It's the same reason—for pure marketing purposes—that screens are measured diagonally.
Not sure about that, SSDs historically have followed base-2 sizes (think of it as a legacy from their memory-based origins). What does happen in SSDs is that you have overprovisioned models that hide a few % of their total size, so instead of a 128GB SSD you get a 120GB one, with 8GB "hidden" from you that the SSD uses to handle wear leveling and garbage collection algorithms to keep it performing nicely for a longer period of time.
More recently you'd have, say, a 512GB SSD with 512GiB of flash so for usable space they're using the same base 10 units as hard disks. And yes, the difference in units happens to be enough overprovisioning for adequate performance.
Sounds like an urban legend. How likely is it that the optimal amount of over-provisioning just so happens to match the gap between power-ten and power-two size conventions?
It doesn't, there's no singular optimal amount of over-provisioning. And that would make no sense, you'd have 28% over-provisioning for a 100/128GB drive, vs 6% over-provisioning for a 500/512GB drive, vs. 1.2% over-provisioning for a 1000/1024GB drive.
It's easy to find some that are marketed as 500GB and have 500x10^9 bytes [0]. But all the NVMes that I can find that are marketed as 512GB have 512x10^9 bytes [1], neither 500x10^9 bytes nor 2^39 bytes. I cannot find any that are labeled "1TB" and actually have 1 tebibyte. Even "960GB" enterprise SSDs are measured in base-10 gigabytes [2].
0: https://download.semiconductor.samsung.com/resources/data-sh...
1: https://download.semiconductor.samsung.com/resources/data-sh...
2: https://image.semiconductor.samsung.com/resources/data-sheet...
(Why are these all Samsung? Because I couldn't find any other datasheets that explicitly call out how they define a GB/TB)
Looking around their website, they appear to be an enthusiastic novice. I looked because I wondered: isn't a hardware architecture course part of any first-year syllabus? The author clearly hasn't a clue about hardware or how memory is implemented.
I remember when they invented kibibytes and mebibytes, shaking my head and thinking they have forever destroyed the meaning of words and things will be off by 2% forever. And it has been.
Metric prefixing should only be used with the unit bit. There is no confusion there. I mean, if you would equate a bit with a certain voltage threshold, you could even argue about fractional bits.
Approximating metric prefixing with kibi, mebi, gibi... is confusing because it doesn't make sense semantically. There is nothing base-10-ish about it.
Why doesn't kilobyte continue to mean 1024, with "kilodebyte" introduced to mean 1000? Byte, to me, implies a binary number system, and if you want to introduce a new nomenclature to reduce confusion, give the new one a new name and let the older or more prevalent one in its domain keep the old one…
Because kilo- already has a meaning. And both usages of kilobyte were (and are) in use. If we are going to fix the problem, we might as well fix it right.
Which universe do you hail from? Because nobody except pedants has relented to this demand from non-computer scientists to conform to a standardization that has nothing to do with them or the work they do.
Agreed. For the naysayers out there, consider these problems:
* You have 1 "MB" of RAM on a 1 MHz system bus which can transfer 1 byte per clock cycle. How many seconds does it take to read the entire memory?
* You have 128 "GB" of RAM and you have an empty 128 GB SSD. Can you successfully hibernate the computer system by storing all of RAM on the SSD?
* My camera shoots 6000×4000 pixels = exactly 24 megapixels. If you assume RGB24 color (3 bytes per pixel), how many MB of RAM or disk space does it take to store one raw bitmap image matrix without headers?
The SI definitions are correct: kilo- always means a thousand, mega- always means a million, et cetera. The computer industry abused these definitions because 1000 is close to 1024, creating endless confusion. It is an idiotic act of self-harm when one "megahertz" of clock speed is not the same mega- as one "megabyte" of RAM. IEC 60027 prefixes are correct: there is no ambiguity when kibi- (Ki) is defined as 1024, and it can coexist beside kilo- meaning 1000.
The whole point of the metric system is to create universal units whose meanings don't change depending on context. Having kilo- be overloaded (like method overloading) to mean 1000 and 1024 violates this principle.
If you want to wade in the bad old world of context-dependent units, look no further than traditional measures. International mile or nautical mile? Pound avoirdupois or Troy pound? Pound-force or pound-mass? US gallon or UK gallon? US shoe size for children, women, or men? Short ton or long ton? Did you know that just a few centuries ago, every town had a different definition of a foot and pound, making trade needlessly complicated and inviting open scams and frauds?
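For concreteness, here is a rough Python sketch of the three problems above; the figures are just the ones quoted in that comment, with the "marketing" unit read both ways.

    MiB = 2**20
    GiB = 2**30

    # Problem 1: 1 "MB" of RAM on a 1 MHz bus moving 1 byte per cycle.
    ram = 1 * MiB                      # 1,048,576 bytes if "MB" is binary
    print(ram / 1_000_000)             # 1.048576 s -- MHz is unambiguously decimal

    # Problem 2: hibernating 128 "GB" (binary) of RAM onto a 128 GB (decimal) SSD.
    ram, ssd = 128 * GiB, 128 * 10**9
    print(ram <= ssd, ram - ssd)       # False: about 9.4 GB of RAM does not fit

    # Problem 3: one raw 6000x4000 RGB24 frame.
    frame = 6000 * 4000 * 3            # 72,000,000 bytes
    print(frame / 10**6, frame / MiB)  # 72.0 decimal MB vs ~68.66 MiB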
> The computer industry abused these definitions because 1000 is close to 1024, creating endless confusion.
They didn't abuse the definitions. It's simply the result of dealing with pins, wires, and bits. For your problems, for example, you won't ever have a system with 1 "MB" of RAM where that's 1,000,000 bytes. The 8086 processor had 20 address lines, 2^20, that's 1,048,576 bytes for 1MB. SI units make no sense for computers.
The only problem is unscrupulous hardware vendors using SI units on computers to sell you less capacity but advertise more.
Yes they did. Kilo- means 1000 in SI/metric. The computer industry decided, "Gee that looks awfully close to 1024. Let's sneakily make it mean 1024 in our context and sell our RAM that way".
> It's simply the result of dealing with pins, wires, and bits. For your problems, for example, you won't ever have a system with 1 "MB" of RAM where that's 1,000,000 bytes.
I'm not disputing that. I'm 100% on board with RAM being manufactured and operated in power-of-2 sizes. I have a problem with how these numbers are being marketed and communicated.
> SI units make no sense for computers.
Exactly! Therefore, use IEC 60027 prefixes like kibi-, because they are the ones that reflect the binary nature of computers. Only use SI if you genuinely respect SI definitions.
> Exactly! Therefore, use IEC 60027 prefixes like kibi-, because they are the ones that reflect the binary nature of computers. Only use SI if you genuinely respect SI definitions.
You have to sort of remember that these didn't exist at the time that "kilobyte" came around. The binary prefixes are — relatively speaking — very new.
I'm happy to say it isn't an SI unit. Kilo meaning 1000 makes no sense for computers, so let's just never use it to mean that.
> Therefore, use IEC 60027 prefixes like kibi-,
No. They're dumb. They sound stupid, they were decades too late, etc. This was a stupid plan. We can define kilo as 1024 for computers -- we could have done that easily -- and just not call them SI units if that makes people weird. This is how we all actually work. So rather than be pedantic about it, let's make the language and units reflect their actual usage. Easy.
> 32 Gb ram chip = 4 GiB of RAM.
That's still wrong and you've solved nothing. 32 Gb = 32 000 000 000 bits = 4 000 000 000 bytes = 4 GB (real SI gigabytes).
If you think 32 Gb are binary gibibits, then you've disagreed with Ethernet (e.g. 2.5 Gb/s), Thunderbolt (e.g. 40 Gb/s), and other communication standards.
That's why I keep hammering on the same point: Creating context-dependent prefixes sows endless confusion. The only way to stop the confusion is to respect the real definitions.
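To make the disagreement concrete, here is a tiny Python sketch of the two readings of that chip label:

    bits_si  = 32 * 10**9         # "32 Gb" read as SI: 32,000,000,000 bits
    bits_bin = 32 * 2**30         # "32 Gb" read as binary: 34,359,738,368 bits

    print(bits_si  / 8 / 10**9)   # 4.0   -> 4 GB in decimal bytes
    print(bits_bin / 8 / 2**30)   # 4.0   -> 4 GiB in binary bytes
    print(bits_bin / 8 / 10**9)   # ~4.29 -> the same chip expressed in decimal GB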
But really!?
I'll keep calling it in nice round powers of two, thank you very much.
All they had to say was that KiB et al. were introduced in 1998, and the adoption has been slow.
And not “but a kilobyte can be 1000,” as if it’s an effort issue.
Okay, but what do you mean by “10”?
It's really not all that crazy of a situation. What bothers me is when some applications call KiB KB, because they are old or lazy.
It should be "kelvin" here. ;)
Also, if you open major Linux distro task managers, you'll be surprised to see that they often show sizes in decimal units when the "i" is missing from the prefix. Many utilities avoid the confusing prefixes "KB", "MB"... and use "KiB", "MiB"...
Why do you keep insisting the author is denying something when the author clearly acknowledges every single thing you're complaining about?
Sometimes, other systems just make more sense.
For example, for time, or angles, or bytes. There are properties of certain numbers (or bases) that make everything descending from them easier to deal with.
for angles and time (and feet): https://en.wikipedia.org/wiki/Superior_highly_composite_numb...
For other problems we use base 2, 3, 8, 16, or 10.
Must we treat metric as a hammer, and every possible problem as a nail?
The ancient Sumerians used multiples of 60, as we continue to do for time and angles (which are related) today. It makes perfect sense. 60 is divisible by 2, 3, 4, 5, and 6, which makes it easy to use in calculations. Even the metric people are not so crazy as to propose replacing these with powers of 10.
Same with pounds, for example. A pound is 16 ounces, which can be divided 4 times without involving any fractions. Try that with metric.
Then there's temperature. Fahrenheit just works more naturally over the human-scale temperature range without involving fractions. Celsius kind of sucks by comparison.
No, they were absolutely that crazy [1]. Luckily the proposal fell through.
1. https://en.wikipedia.org/wiki/Decimal_time
Well I guess we already basically have this in practice since Ki can be shortened to K seeing as metric prefixes are always lower case and we clearly aren't talking about kelvin bytes.
* Yeah, I read the article. Regardless of the IEC's noble attempt, in all my years of working with people and computers I've never heard anyone actually pronounce MiB (or write it out in full) as "mebibyte".
Sectors per track or tracks per side are subject to change. Moreover, a different filesystem may have non-linear growth of the MFT/superblock, which will have a different overhead.
https://en.wikipedia.org/wiki/List_of_floppy_disk_formats
It doesn't matter. "kilo" means 1000. People are free to use it wrong if they wish.
“Kilo” can mean what we want in different contexts and it’s really no more or less correct as long as both parties understand and are consistent in their usage to each other.
It's also stupid because it's rare that anyone outside of programming even needs to care exactly how many bytes something is. At the scales at which each of kilobyte, megabyte, gigabyte, terabyte etc. are used, the smaller values are pretty much insignificant details.
If you ask for a kilogram of rice, then you probably care more that this 1kg of rice is the same as the last 1kg of rice you got; you probably wouldn't even care how many grams that is. Similarly, if you order 1 ton of rice, do you care exactly how many grams it is, or do you just care that this 1 ton is the same as that 1 ton?
This whole stupidity started because hard disk manufacturers wanted to make their drives sound bigger than they actually were. At the time, everybody buying hard disks knew about this deception and just put up with it. We'd buy their 2GB drive and think to ourselves, "OK so we have 1.86 real GB". And that was the end of it.
Can you just imagine if manufacturers started advertising computers as having 34.3GB of RAM? Everybody would know it was nonsense and call it 32GB anyway.
Before all that nonsense, it was crystal clear: a megabyte in storage was unambiguously 1024 x 1024 bytes --- with the exception of crooked mass storage manufacturers.
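The 34.3GB figure above is just the unit conversion; a quick Python check, assuming "32GB" of RAM means 32 GiB:

    ram = 32 * 2**30        # "32GB" of RAM is really 32 GiB = 34,359,738,368 bytes
    print(ram / 10**9)      # 34.359738368 -> the "34.3GB" a decimal label would show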
There was some confusion, to be sure, but the partial success of the attempt to redefine the prefixes to their power-of-ten meanings has caused more confusion.
We agree on meanings so that we can communicate and make progress without endless debate and confusion.
SI is pretty clear for a reason.
We decidedly do not do that. There's a whole term for new terms that arbitrarily get injected or redefined by new people: "slang". I don't understand a lot of the terms teenagers say now, because there's lots of slang that I don't know because I don't use TikTok and I'm thirty-something without kids so I don't hang out with teenagers.
I'm sure it was the same when I was a teenager, and I suspect this has been going on since antiquity.
New terms are made up all the time, but there's plenty of times existing words get redefined. An easy one, I say "cool" all the time, but generally I'm not talking about temperature when I say it. If I said "cool" to refer to something that I like in 1920's America, they would say that's not the correct use of the word.
SI units are useful, but ultimately colloquialisms exist and will always exist. If I say kilobyte and mean 1024 bytes, and if the person on the other end knows that I mean 1024 bytes, that's fine and I don't think it's "nihilistic".
https://en.wikipedia.org/wiki/Language_planning
(Then you could decide what you think about language planning.)
(And by that I mean "what the fuck, no...")
> All words are made up.
Yes, and the made up words of kilo and kibi were given specific definitions by the people who made them up:
* https://en.wikipedia.org/wiki/Metric_prefix
* https://en.wikipedia.org/wiki/Binary_prefix
> […] as long as both parties understand and are consistent in their usage to each other.
And if they don't? What happens then?
Perhaps it would be easier to use the definitions of words as they are set up in standards and regulations, so context is less of an issue.
* https://xkcd.com/1860/
Good for them. People make up their own definitions for words all the time. Some of those people even try to get others to adopt their definition. Very few are ever successful. Because language is about communicating shared meaning. And there is a great deal of cultural inertia behind the kilo = 2^10 definition in computer science and adjacent fields.
Kilo was generally understood to mean one thousand long before it was adopted by a standards committee. I know the French love to try and prescribe the use of language, but in most of the world words just mean what people generally understand them to mean; and that meaning can change.
If you're talking loosely, then you can get away with it.
That being said, I think the difference between MiB and MB is niche for most people
90 mm floppy disks. https://jdebp.uk/FGA/floppy-discs-are-90mm-not-3-and-a-half-...
Which I have taken to calling 1440 KiB – accurate and pretty recognizable at the same time.
Donald Knuth himself said[1]:
> The members of those committees deserve credit for raising an important issue, but when I heard their proposal it seemed dead on arrival --- who would voluntarily want to use MiB for a maybe-byte?! So I came up with the suggestion above, and mentioned it on page 94 of my Introduction to MMIX. Now to my astonishment, I learn that the committee proposals have actually become an international standard. Still, I am extremely reluctant to adopt such funny-sounding terms; Jeffrey Harrow says "we're going to have to learn to love (and pronounce)" the new coinages, but he seems to assume that standards are automatically adopted just because they are there.
If Gordon Bell and Gene Amdahl used binary sizes -- and they did -- and Knuth thinks the new terms from the pre-existing units sound funny -- and they do -- then I feel like I'm in good company on this one.
0: https://honeypot.net/2017/06/11/introducing-metric-quantity....
1: https://www-cs-faculty.stanford.edu/~knuth/news99.html
> I'm a big fan of binary numbers, but I have to admit that this convention flouts the widely accepted international standards for scientific prefixes.
He also calls it “an important issue” and had written “1000 MB = 1 gigabyte (GB), 1000 GB = 1 terabyte (TB), 1000 TB = 1 petabyte (PB), 1000 PB = 1 exabyte (EB), 1000 EB = 1 zettabyte (ZB), 1000 ZB = 1 yottabyte (YB)” in his MMIX book even before the new binary prefixes became an international standard.
He is merely complaining that the new names for the binary prefixes sound funny (and has his own proposal like “large megabyte” and notation MMB etc), but he's still using the kilo/mega/etc prefixes with decimal meanings.
Ummm, what? https://en.wikipedia.org/wiki/Metric_prefix
No, they already did the opposite with KiB, MiB.
Because most metric decimal units are used for non-computing things. Kilometers, etc. Are you seriously proposing that kilometers should be renamed kitrimeters because you think computing prefixes should take priority over every other domain of science and life?
It would be annoying if one frequently found themselves calculating gigabytes per hectare. I don't think I've ever done that. The closest I've seen is measuring magnetic tape density, where you get weird units like "characters per inch", where neither "character" nor "inch" are the common units for their respective metrics.
E.g. Macs measure file sizes in powers of 10 and call them KB, MB, GB. Windows measures file sizes in powers of 2 and calls them KB, MB, GB instead of KiB, MiB, GiB. Advertised hard drives come in powers of 10. Advertised memory chips come in powers of 2.
When you've got a large amount of data or are allocating an amount of space, are you measuring its size in memory or on disk? On a Mac or on Windows?
Especially that it was only partially successful.
Which is not to say that there had been zero confusion; but it was only made worse.
And in most cases, using 1024 is more convenient because the sizes of page sizes, disk sectors, etc. are powers of 2.
That doesn't conform to SI. It should be written as kB mB gB. Ambiguity will only arise when speaking.
> Advertised hard drives come in powers of 10.
Mass storage (kB) has its own context at this point, distinct from networking (kb/s) and general computing (KB).
> When you've got a large amount of data or are allocating an amount of space, ...
You aren't speaking but are rather working in writing. kb, kB, Kb, and KB refer to four different units, and there is absolutely zero ambiguity. The only question that might arise (depending on who you ask) is how to properly verbalize them.
Little m is milli, big M is mega. Little g doesn’t exist, only big G.
I disagreed strongly - I think X-per-second should be decimal, to correspond to Hertz. But for quantity, binary seems better. (modern CS papers tend to use MiB, GiB etc. as abbreviations for the binary units)
Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box, and advertise it as X GB (decimal) of usable storage. (probably a bit less, as a few blocks of the X binary GB of flash would probably be DOA) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
The decimal-vs-binary discrepancy is used more as slack space to cope with the inconvenience of having to erase whole 16MB blocks at a time while allowing the host to send write commands as small as 512 bytes. Given the limited number of program/erase cycles that any flash memory cell can withstand, and the enormous performance penalty that would result from doing 16MB read-modify-write cycles for any smaller host writes, you need way more spare area than just a small multiple of the erase block size. A small portion of the spare area is also necessary to store the logical to physical address mappings, typically on the order of 1GB per 1TB when tracking allocations at 4kB granularity.
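Both of those figures are easy to sanity-check; here is a rough Python sketch (the 4-bytes-per-map-entry size is my assumption, matching the "on the order of" wording above):

    # 1) Binary raw flash behind a decimal label:
    raw, exposed = 2**30, 10**9          # 1 GiB of flash per advertised "GB"
    print((raw - exposed) / exposed)     # 0.0737... -> the ~7.37% figure above

    # 2) Logical-to-physical map for a "1TB" drive at 4 KiB granularity:
    entries = 10**12 // 4096             # one entry per 4 KiB logical block
    print(entries * 4 / 10**9)           # ~0.98 GB of mapping, i.e. "~1GB per 1TB"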
Interestingly, from 10GBit/s, we now also have binary divisions, so 5GBit/s and 2.5GBit/s.
Even at slower speeds, these were traditionally always decimal based - we call it 50bps, 100bps, 150bps, 300bps, 1200bps, 2400bps, 9600bps, 19200bps - and then we had the odd one out: 56k (actually 57600bps), where the k means 1024 (approximately), the first and last common speed to use a base-2 kilo. Once you get into Mbps it's back to decimal.
56000 BPS was the bitrate you could get out of a DS0 channel, which is the digital version of a normal phone line. A DS0 is actually 64000 BPS, but 1 bit out of 8 is "robbed" for overhead/signalling. An analog phone line got sampled to 56000 BPS, but lines were very noisy, which was fine for voice, but not data.
7 bits per sample * 8000 samples per second = 56000, not 57600. That was theoretical maximum bandwidth! The FCC also capped modems at 53K or something, so you couldn't even get 56000, not even on a good day.
I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000 byte pages.
Now ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is octets! Starting with the preamble. But the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined for CSMA/CD to work, but the max could have been anything.
RAM had binary sizing for perfectly practical reasons. Nothing else did (until SSDs inherited RAM's architecture).
We apply it to all the wrong things mostly because the first home computers had nothing but RAM, so binary sizing was the only explanation that was ever needed. And 50 years later we're sticking to that story.
- magnetic media
- optical media
- radio waves
- time
There's good reasons for having power-of-2 sectors (they need to get loaded into RAM), but there's really no compelling reason to have a power-of-2 number of sectors. If you can fit 397 sectors, only putting in 256 is wasteful.
The choice would be effectively arbitrary, the number of actual bits or bytes is the same regardless of the multiplier that you use. But since it's for a computer, it makes sense to use units that are comparable (e.g. RAM and HD).
Just later, some marketing assholes thought they could better sell their hard drives by lying about the size, and weasel out of legal issues by redefining the units.
There are probably cases where corresponding to Hz is useful, but for most users I think 119MiB/s is more useful than 1Gbit/s.
(The old excuse was that networks are serial but they haven't been serial for decades.)
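The 119MiB/s figure above is just a unit conversion; a quick Python check:

    link = 10**9                 # 1 Gbit/s -- link rates are decimal
    print(link / 8 / 10**6)      # 125.0 decimal MB/s
    print(link / 8 / 2**20)      # ~119.2 MiB/s, the number quoted above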
The same (and even more) confusion is engendered when talking about "fifths" etc.
I gave some examples in my post https://blog.zorinaq.com/decimal-prefixes-are-more-common-th...
Storage capacity also uses binary prefixes. The distinction here isn't that file sizes are reported in binary numbers and storage capacity is reported in decimal numbers. It's that software uses binary numbers and hard drive manufacturers use decimal numbers. You don't see df reporting files in binary units and capacities in decimal units.
Of that large list of measurements, only bandwidth is measured in bytes, making the argument mostly an exercise in sophistry. You can't convince anyone that KB means 1000 bytes by arguing that kHz means 1000 Hz.
Call me recalcitrant, reactionary, or whatever, but I will not say kibibyte out loud. It's a dumb word and I'm not using it. It was a horrible choice.
"I bought a two tib SSD."
"I just want to serve five pibs."
no you didn't, that doesn't exist, you bought 2 trillion bytes, roughly 199 billion bytes short
The author doesn’t actually answer their question, unless I missed something?
They go on to make a few more observations, and finally say only that the current different definitions are sometimes confusing to non-experts.
I don't see much of an argument here for changing anything. Some non-experts experience minor confusion about two things that are different; did I miss something bigger in this?
Because Windows, and only Windows, shows it this way. It is official and documented: https://devblogs.microsoft.com/oldnewthing/20090611-00/?p=17...
> Explorer is just following existing practice. Everybody (to within experimental error) refers to 1024 bytes as a kilobyte, not a kibibyte. If Explorer were to switch to the term kibibyte, it would merely be showing users information in a form they cannot understand, and for what purpose? So you can feel superior because you know what that term means and other people don’t.
If you really want to come at it from an information theory perspective, even the "byte" is rather arbitrary - the only thing that matters is the number of bits.
This ambiguity is documented at least back to 1984, by IBM, the pre-eminent computer company of the time.
In 1972 IBM started selling the IBM 3333 magnetic disk drive. This product catalog [0] from 1979 shows them marketing the corresponding disks as "100 million bytes" or "200 million bytes" (3336 mdl 1 and 3336 mdl 11, respectively). By 1984, those same disks were marketed in the "IBM Input/Output Device Summary"[1] (which was intended for a customer audience) as "100MB" and "200MB"
0: (PDF page 281) "IBM 3330 DISK STORAGE" http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
1: (PDF page 38, labeled page 2-7, Fig 2-4) http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
Also, hats off to http://electronicsandbooks.com/ for keeping such incredible records available for the internet to browse.
-------
Edit: The below is wrong. Older experience has corrected me - there has always been ambiguity (perhaps bifurcated between CPU/OS and storage domains). "And that with such great confidence!", indeed.
-------
The article presents wishful thinking. The wish is for "kilobyte" to have one meaning. For the majority of its existence, it had only one meaning - 1024 bytes. Now it has an ambiguous meaning. People wish for an unambiguous term for 1000 bytes; however, that word does not exist. People also might wish that others use kibibyte any time they reference 1024 bytes, but that is also wishful thinking.
The author's wishful thinking is falsely presented as fact.
I think kilobyte was the wrong word to ever use for 1024 bytes, and I'd love to go back in time to tell computer scientists that they needed to invent a new prefix to mean "1,024" / "2^10" of something, which kilo- never meant before kilobit / kilobyte were invented. Kibi- is fine, the phonetics sound slightly silly to native English speakers, but the 'bi' indicates binary and I think that's reasonable.
I'm just not going to fool myself with wishful thinking. If, in arrogance or self-righteousness, one simply assumes that every time they see "kilobyte" it means 1,000 bytes - then they will make many, many failures. We will always have to take care to verify whether "kilobyte" means 1,000 or 1,024 bytes before implementing something which relies on that for correctness.
There was always confusion about whether a kilobyte was 1000 or 1024 bytes. Early diskettes always used 1000; only when the 8-bit home computer era started was the 1024 convention firmly established.
Before that it made no sense to talk about kilo as 1024. Earlier computers measured space in records and words, and I guess you can see how in 1960, no one would use kilo to mean 1024 for a 13 bit computer with 40 byte records. A kiloword was, naturally, 1000 words, so why would a kilobyte be 1024?
1024 being near-ubiquitous was only the case in the 90s or so - except for drive manufacturing and signal processing. Binary prefixes didn't invent the confusion, they were a partial solution. As you point out, while it's possible to clearly indicate binary prefixes, we have no unambiguous notation for decimal bytes.
Even worse, the 3.5" HD floppy disk format used a confusing combination of the two. Its true capacity (when formatted as FAT12) is 1,474,560 bytes. Divide that by 1024 and you get 1440KB; divide that by 1000 and you get the oft-quoted (and often printed on the disk itself) "1.44MB", which is inaccurate no matter how you look at it.
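For the record, here's that mixed-unit arithmetic as a short Python check:

    capacity = 1_474_560                       # bytes on a formatted 3.5" HD floppy
    print(capacity / 1024)                     # 1440.0 -> "1440 KB" (binary K)
    print(capacity / 1024 / 1000)              # 1.44   -> the advertised "1.44 MB" (1000 x 1024)
    print(capacity / 10**6, capacity / 2**20)  # 1.47456 decimal MB vs 1.40625 MiB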
But that said, we aren't talking about sector sizes. Of course storage mediums are always going to use sector sizes of powers of two. What's being talked about here is the confusion in how to refer to the storage medium's total capacity.
I wonder if there's a wikipedia article listing these...
Example: in 1972, DEC PDP 11/40 handbook [0] said on first page: "16-bit word (two 8-bit bytes), direct addressing of 32K 16-bit words or 64K 8-bit bytes (K = 1024)". Same with Intel - in 1977 [1], they proudly said "Static 1K RAMs" on the first page.
[0] https://pdos.csail.mit.edu/6.828/2005/readings/pdp11-40.pdf
[1] https://deramp.com/downloads/mfe_archive/050-Component%20Spe...
But once hard drives started hitting about a gigabyte was when everyone started noticing and howling.
Similarly, the 4104 chip was a "4kb x 1 bit" RAM chip and stored 4096 bits. You'd see this in the whole 41xx series, and beyond.
I was going to say that what it could address and what they called what it could address is an important distinction, but found this fun ad from 1976[1].
"16K Bytes of RAM Memory, expandable to 60K Bytes", "4K Bytes of ROM/RAM Monitor software", seems pretty unambiguous that you're correct.
Interestingly wikipedia at least implies the IBM System 360 popularized the base-2 prefixes[2], citing their 1964 documentation, but I can't find any use of it in there for the main core storage docs they cite[3]. Amusingly the only use of "kb" I can find in the pdf is for data rate off magnetic tape, which is explicitly defined as "kb = thousands of bytes per second", and the only reference to "kilo-" is for "kilobaud", which would have again been base-10. If we give them the benefit of the doubt on this, presumably it was from later System 360 publications where they would have had enough storage to need prefixes to describe it.
[1] https://commons.wikimedia.org/wiki/File:Zilog_Z-80_Microproc...
[2] https://en.wikipedia.org/wiki/Byte#Units_based_on_powers_of_...
[3] http://www.bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-...
I don't know if that's correct, but at least it'd explain the mismatch.
That's the microcomputer era that has defined the vast majority of our relationship with computers.
IMO, having lived through this era, the only people pushing 1,000 byte kilobytes were storage manufacturers, because it allows them to bump their numbers up.
https://www.latimes.com/archives/la-xpm-2007-nov-03-fi-seaga...
More like late 60s. In fact, in the 70s and 80s, I remember the storage vendors being excoriated for "lying" by following the SI standard.
There were two proposals to fix things in the late 60s, by Donald Morrison and Donald Knuth. Neither were accepted.
Another article suggesting we just roll over and accept the decimal versions is here:
https://cacm.acm.org/opinion/si-and-binary-prefixes-clearing...
This article helpfully explains that decimal KB has been "standard" since the very late 90s.
But when such an august personality as Donald Knuth declares the proposal DOA, I have no heartburn using binary KB.
https://www-cs-faculty.stanford.edu/~knuth/news99.html
In fact, they practically say the same exact thing you have said: In a nutshell, base-10 prefixes were used for base-2 numbers, and now it's hard to undo that standard in practice. They didn't say anything about making assumptions. The only difference is that the author wants to keep trying, and you don't think it's possible? Which is perfectly fine. It's just not as dramatic as your tone implies.
> The author's wishful thinking is falsely presented as fact.
There's good reason why the meanings of SI prefixes aren't set by convention or by common usage or by immemorial tradition, but by the SI. We had several thousand years of setting weights and measures by local and trade tradition and it was a nightmare, which is how we ended up with the SI. It's not a good show for computing to come along and immediately recreate the long and short ton.
Adding to your point, it is human nature to create industry- or context-specific units and refuse to play with others.
In the non-metric world, I see examples like: Paper publishing uses points (1/72 inch), metal machinists use thousandths of an inch, woodworkers use feet and inches and binary fractions, land surveyors use decimal feet (unusual!), waist circumference is in inches, body height is in feet and inches, but you buy fabric by the yard, airplane altitudes are in hundreds to tens of thousands of feet instead of decimal miles. Crude oil is traded in barrels but gasoline is dispensed in gallons. Everyone thinks their usage of units and numbers is intuitive and optimal, and everyone refuses to change.
In the metric(ish) world, I still see many tensions. The micron is a common alternate name for the micrometre, yet why don't we have a millin or nanon or picon? The solution is to eliminate the micron. I've seen the angstrom (0.1 nm) in spectroscopy and in the discussion of CPU transistor sizes, yet it diverts attention away from the picometre. The bar (100 kPa) is popular in talking about things like tire pressure because it's nearly 1 atmosphere. The mmHg is a unit of pressure that sounds metric but is not; the correct unit is the pascal. No one in astronomy uses mega/giga/tera/peta/etc.-metres; instead they use AU and parsec and (thousand, million, billion) light-years. Particle physics uses eV/keV/MeV instead of units around the picojoule.
Having a grab bag of units and domains that don't talk to each other is indeed the natural state of things. To put your foot down and say no, your industry does not get its own special snowflake unit, stop that nonsense and use the standardized unit - that takes real effort to achieve.
Here's my theory. In the beginning, everything was base10. Because humans.
Binary addressing made sense for RAM. Since it makes decoding address lines into chip selects (or slabs of core, or whatever) a piece of cake, having chips be a round number in binary made life easier for everyone.
Then early DOS systems (CP/M comes to mind particularly) mapped disk sectors to RAM regions, so to enable this shortcut, disk sectors became RAM-shaped. The 512-byte sector was born. File sizes can be written in bytes, but what actually matters is how many sectors they take up. So file sizing inherited this shortcut.
But these shortcuts never affected "real computers", only the hamstrung crap people were running at home.
So today we have multiple ecosystems. Some born out of real computers, some with a heavy DOS inheritance. Some of us were taught DOS's limitations as truth, and some of us weren't.
However, it doesn't seem to be divided into sectors at all; it's more like each track is a loop of magnetic tape. In that context it makes a bit more sense to use decimal units, measuring in bits per second like for serial comms.
Or maybe there were some extra characters used for ECC? 5 million / 100 / 100 = 500 characters per track, leaves 72 bits over for that purpose if the actual size was 512.
First floppy disks - also from IBM - had 128-byte sectors. IIRC, it was chosen because it was the smallest power of two that could store an 80-column line of text (made standard by IBM punched cards).
Disk controllers need to know how many bytes to read for each sector, and the easiest way to do this is by detecting overflow of an n-bit counter. Comparing with 80 or 100 would take more circuitry.
You can get away with those on machines with 64 bit address spaces and TFLOPs of math capacity. You can't on anything older or smaller.
You need character to admit that. I bow to you.
Kudos for getting back. (and closing the tap of "you are wrong" comments :))
Which makes it really @#ing annoying when you have things like "I want to transmit 8 gigabytes (meaning gibibytes, 2^30) over a 1 gigabit/s link, how long will it take?". Welcome to every networking class in the 90s.
We should continue moving towards a world where 2^k prefixes have separate names and we use SI prefixes only for their precise base-10 meanings. The past is polluted but we hopefully have hundreds of years ahead of us to do things better.
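Ignoring protocol overhead, the networking-class exercise above works out like this (a rough Python sketch):

    link = 10**9                     # 1 Gbit/s, decimal
    bits_binary  = 8 * 2**30 * 8     # 8 GiB of payload, in bits
    bits_decimal = 8 * 10**9 * 8     # what "8 gigabytes" would mean in SI terms
    print(bits_binary / link)        # ~68.7 seconds
    print(bits_decimal / link)       # 64.0 seconds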
Which doesn't make it more correct, of course, even though I strongly believe that it is (where appropriate, for things like memory sizes). Just saying, it goes much further back than 1984.
Which is the reality. "kilobyte" means "1000 bytes". There's no possible discussion over this fact.
Many people have been using it wrong for decades, but its literal value did not change.
You are free to intend only one meaning in your own communication, but you may sometimes find yourself being misunderstood: that, too, is reality.
E.g., M-W lists both, with even the 1,024 B definition being listed first. Wiktionary lists the 1,024 B definition, though it is tagged as "informal".
As a prescriptivist myself I would love it if the world could standardize on kilo = 1000, kibi = 1024, but that'll likely take some time … and the introduction of the word to the wider public, who I do not think is generally aware of the binary prefixes, and some large companies deciding to use the term, which they likely won't do, since companies are apt to prefer low-grade perpetual confusion over some short-term confusion during the switch.
This is a myth. The first IBM hard drive held 5,000,000 characters in 1956 - before bytes were even in common usage. Drives have always been base10, it's not a conspiracy.
Drives are base10, lines are base10, clocks are base10, pretty much everything but RAM is base10. Base2 is the exception, not the rule.
You can say that one meaning is more correct than the other, but that doesn't vanish the other meaning from existence.
Now, it depends.
Yeah, I already knew that, lol.
But thanks for bringing it to my attention. :-)
In fact, this is the only case I can think of where that has ever happened.
If we are talking about kilobytes, it could just as easily be the opposite.
Unless you were referring to only contracts which you yourself draft, in which case it'd be whatever you personally want.
https://www-cs-faculty.stanford.edu/~knuth/news99.html
And he was right.
Context is important.
"K" is an excellent prefix for 1024 bytes when working with small computers, and a metric shit ton of time has been saved by standardizing on that.
When you get to bigger units, marketing intervenes, and, as other commenters have pointed out, we have the storage standard of MB == 1000 * 1024.
But why is that? Certainly it's because of the marketing, but also it's because KB has been standardized for bytes.
> Which is the reality. "kilobyte" means "1000 bytes". There's no possible discussion over this fact.
You couldn't be more wrong. Absolutely nobody talks about 8K bytes of memory and means 8000.
"I will not sacrifice my dignity. We've made too many compromises already; too many retreats. They invade our space and we fall back. They assimilate entire worlds with awkward pronunciations. Not again. The line must be drawn here! This far, no further! And I will make them pay for what they've done to the kilobyte!"
SI units are attempting to fix standard measurements to perceived constants in nature. A meter (distance) is the distance light travels in a vacuum, back and forth, within a certain number of oscillations of a cesium atom (time). This doesn't mean we tweak the meter to conform to observational results, even though we'd all be happier if light really were 300,000 km/s instead of ~299,792 km/s.
Then there's the problem of not mixing different measurement units. SI was designed to conform all measurements to the same base-10 exponents (cm, m, km versus feet, inches, and yards). But the author's attempt to resolve this matter doesn't even conform to standardised SI units as we would expect it to.
What is a byte? Well, 8 bits, sometimes. What is a kilobit? 1000 bits. What is a kilobyte? 1000 bytes, or 1024 bytes.
Now we've already mixed units based on what a bit or a byte even is, with the ×8 multiplier on top of the 1000-or-1024 exponent.
And if you think, hey, at least the bit is the least divisible unit of information, that's not even correct. If there should be a reformalisation of information units, you would agree that the number of "0"s is the least divisible unit of information. A kilo of zeros would be 1000. A 'byte' would be defined as containing up to 256 zeros. A megazero would contain up to a million zeros.
It wouldn't make any intuitive sense for anyone to count 0s, which would automatically convert your information back to base 10, but it does prove that the most sensible unit of information is already what we had before; that is, you're not mixing bytes (powers of 2) with SI-defined units of 1000.
“A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time.”
You can use `--si` for fake, 1000-byte kilobytes - trying it, it seems weird that these are reported with a lowercase 'k' but 'M' and so on remain uppercase.
For SI units, the abbreviations are defined, so a lowercase k for kilo and uppercase M for mega is correct. Lower case m is milli, c is centi, d is deci. Uppercase G is giga, T is tera and so on.
https://en.wikipedia.org/wiki/International_System_of_Units#...
People who say things like kibibyte usually have no sense of humor, and no tolerance for inconsistencies.
ketchi means stingy in Japanese (careful with that word because it is informal and negative). I propose we rename kibibyte to ketchibaito. ketchibaito could also take on a double meaning as denoting badly paid part-time work.
(One word having two meanings: don't that just make the kibibyte people's puny heads explode ...)
KB is 1024 bytes, and don't you dare try stealing those 24 bytes from me
https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal
Eg https://en.wikipedia.org/wiki/Kilobyte
Yeah it sounds dumb, but it’s really not that different from your suggestion.
It's leagues better than "kibibyte".
It would be nice to have a different standard for decimal vs. binary kilobytes.
But if Don Knuth thinks that the "international standard" naming for binary kilobytes is dead on arrival, who am I to argue?
https://www-cs-faculty.stanford.edu/~knuth/news99.html
I propose some naming based on shift distance, derived from the latin iterativum. https://en.wikipedia.org/wiki/Latin_numerals#Adverbial_numer...
* 2^10, the kibibyte, is a deci (shifted) byte, or just a 'deci'
* 2^20, the mebibyte, is a vici (shifted) byte, or a 'vici'
* 2^30, the gibibyte, is a trici (shifted) byte, or a 'trici'
I mean, we really only need to think in bytes for memory addressing, right? The base wouldn't matter much if we were talking exabytes, would it?
Many things acquire domain specific nuanced meaning ..
"in binary computing traditionally prefix + byte implied binary number quantities."
There are no bytes involved in Hz or FLOPs.
Because it never did!