Archived from groups: comp.sys.ibm.pc.hardware.storage
In article <c4uvid$2o5760$5@ID-79662.news.uni-berlin.de>,
Folkert Rienstra <folkertdotrienstra@freeler.nl> wrote:
>"J. Clarke" <jclarke@nospam.invalid> wrote in message news:c4q9aj010m6@news3.newsguy.com
>> Tom Scales wrote:
>>
>> > I'm not trying to start a fight, but there ARE 8 bits in a byte. Surely
>> > you're joking that you think it is 10.
>>
>> Well, actually there are however many bits in a byte the machine designer
>> chose to put there. All the currently popular machines have 8-bit bytes so
>> 8 bits has come to be assumed but there is nothing sacred about that number.
>
>That may be so for words, but not bytes.
>The PDP* had 12-bit words but a byte was still 8 bits, afaik.
>
From the Jargon File (aka The New Hacker's Dictionary):
byte /bi:t/ n.
[techspeak] A unit of memory or data equal to the amount used to
represent one character; on modern architectures this is usually 8
bits, but may be 9 on 36-bit machines. Some older architectures used
`byte' for quantities of 6 or 7 bits, and the PDP-10 supported `bytes'
that were actually bitfields of 1 to 36 bits! These usages are now
obsolete, and even 9-bit bytes have become rare in the general trend
toward power-of-2 word sizes.
Historical note: The term was coined by Werner Buchholz in 1956 during
the early design phase for the IBM Stretch computer; originally it was
described as 1 to 6 bits (typical I/O equipment of the period used
6-bit chunks of information). The move to an 8-bit byte happened in
late 1956, and this size was later adopted and promulgated as a
standard by the System/360. The word was coined by mutating the word
`bite' so it would not be accidentally misspelled as bit. See also
nybble.
Does this put an end to this thread, please?
>>
>> When talking about data communications it's important to consider exactly
>> what you mean by "throughput". If you count every bit that goes down the
>> wire you get one number. If you discount the bits that carry the overhead
>> of the data-link protocol then you get another number. If you discount the
>> bits that carry the overhead of the transport protocol you get a third, and
>> so on. In data communications a byte is often assumed to be ten bits to
>> allow for protocol overhead and get a more realistic view of actual
>> throughput.
>>
>>
>> > Tom
>> > "Eric Gisin" <ericgisin@graffiti.net> wrote in message
>> > news:c4pqju0r1@enews1.newsguy.com...
>> > > "Tom Scales" <tomtoo@softhome.net> wrote in message news:Rb6dnac4faaW_-3dRVn-hw@comcast.com...
>> > > > Why in the world would you divide by 10?
>> > > >
>> > > Because that's how 8 bits are encoded. Read the spec.
>> > >
>> > > > There are EIGHT, count'em EIGHT (8) BITS in a BYTE.
>> > > >
>> > > > You divide by 8.
>> > > >
>> > > Nonsense.
>> > >
>> > > > "Eric Gisin" <ericgisin@graffiti.net> wrote in
>> > > > message news:c4pg500ufl@enews4.newsguy.com...
>> > > > > Wrong. 1500Mb/10 is 150MB.
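To put a number on the divide-by-10 point above: serial links such as SATA use 8b/10b line encoding, so every 8-bit byte of payload occupies 10 bits on the wire. A quick sketch of the arithmetic (the function name is mine, just for illustration):

```python
# Sketch: converting a raw line rate to usable payload throughput when
# each 8-bit byte is transmitted as a 10-bit symbol (8b/10b encoding).

def payload_rate_mbps(line_rate_mbit: float, bits_per_byte_on_wire: int = 10) -> float:
    """Return payload throughput in MB/s for a given line rate in Mbit/s."""
    return line_rate_mbit / bits_per_byte_on_wire

# SATA I signals at 1500 Mbit/s; dividing by 10 rather than 8 accounts
# for the two extra encoding bits carried with every byte:
print(payload_rate_mbps(1500))   # 150.0 MB/s
print(1500 / 8)                  # 187.5 MB/s -- the naive figure
```

Hence "1500Mb/10 is 150MB": the naive divide-by-8 overstates what the link can actually deliver.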
--
Al Dykes
-----------
adykes at p a n i x . c o m