PICList Thread
'[EE] Why don't baud rates just double?'
2004\01\20@192023 by James Newton, Host

Why is each higher baud rate twice as fast until 14.4K which is only 1.5
times faster than 9600?

115,200
57,600
28,800
14,400
   ?
 9,600
 4,800
 2,400
 1,200
   600
   300

---
James Newton: PICList webmaster/Admin
jamesnewton@piclist.com  1-619-652-0593 phone
http://www.piclist.com/member/JMN-EFP-786
PIC/PICList FAQ: http://www.piclist.com

--
http://www.piclist.com#nomail Going offline? Don't AutoReply us!
email listserv@mitvma.mit.edu with SET PICList DIGEST in the body

2004\01\20@193519 by David P Harris

You did not list: 19200, 38400, ... 19200 is very common.

I think the 14.4 and 28.8 modems 'caused' this effect, and those rates
were dictated by limitations of the technology at the time :-)

David

James Newton, Host wrote:

{Quote hidden}


2004\01\20@200739 by Ian McLean

There is quite a lengthy history behind this.  To summarise in a crude
fashion, telephone lines can only really handle about 9600 max.  By using
some clever compression and error handling techniques, they increased the
speed to 14.4k, 28.8k and then 33.6k.  Then, hitting a barrier with that
technology, by rethinking the problem and using DSP to handle some really
clever compression, they upped the speed again to 56k, which started with
56k flex, and then the very clever V.90.  Then, by rethinking the problem
completely again, introduced FFT and cleverer DSP's, and came up with ADSL.
I think this explains why things get different after 9600 baud.  The guys
who invented these techniques were indeed very clever.

Rgs
Ian.

> {Original Message removed}

2004\01\20@201610 by David P Harris

Hi-
Well ADSL really uses the bandwidth available that is not the original
telephone bandwidth, i.e. it is broad band with the telephone
frequencies deleted.  Still damn clever...
David

Ian McLean wrote:

{Quote hidden}

>>{Original Message removed}

2004\01\20@205350 by Roy J. Gromlich

ADSL actually uses what could be referred to as a Radio Frequency carrier
with the data modulated on top of it in much the same fashion as is done in
a Cable Modem.  Now it isn't really surprising that cable can carry
broadband signals - cable systems regularly carry frequencies from roughly 30
MHz at the low end to 1 GHz plus at the high end. Using some bandwidth in
the 100s of MHz range to carry data streams is pretty obvious.  What was
clever was to allow a single-direction system to carry data in both
directions.  That is why your Download speed is probably 10X your upload
speed (on cable).

Now the really bright folks who designed ADSL did something which most of us
thought was impossible - they managed to send & receive radio frequency
carriers over an ordinary twisted pair POTS line.  The little filter unit
which you plug into RJ11s where you have telephones blocks the RF from the
phone so as not to overload the phone electronics AND also to prevent signal
absorption of the RF carrier in the phone circuits. That they made it work
is a testament to good engineering, IMHO.

Roy J. Gromlich

{Original Message removed}

2004\01\20@212812 by Jake Anderson

my understanding is that standard phone calls run at about 64kbps data rate.

in Australia at least the carriers only guarantee the call to 9600bps.
The improvement in modem speeds has been finding better ways to utilise the
line, better modulations etc. The compression and error checking runs on top
of that. I've had uploads of 30k/s on a 56k modem (which only connects at
33.6k up stream btw) due to the compression the modem was doing.

> {Original Message removed}

2004\01\20@230818 by M. Adam Davis

The 56k modem speed is actually a theoretical number arising from the
64K available (in theory) to every phone conversation encoded to digital
form and then back to analog.

To save on the cost of having an extra clock wire for certain portions
of the digital transmission, the last bit of each byte is toggled,
regardless of the actual value of that bit from the A/D conversion.
This enables easy clock recovery for the phone company, and one less
dedicated wire for the clocking circuit.

8 bits at 8kHz is 64kbps on the standard POTS line.  7 bits at 8kHz is -
tada, 56kbps.
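That arithmetic - samples per second times the bits kept per sample - can be sketched in a few lines of Python (the 8000 Hz / 8-bit figures are the standard telephony numbers, as above):

```python
# DS0 arithmetic: a voice channel is sampled 8000 times a second,
# 8 bits per sample; losing one bit per sample leaves the 56k ceiling.
SAMPLE_RATE_HZ = 8000
BITS_PER_SAMPLE = 8

full_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE          # the 64 kbps channel
usable_rate = SAMPLE_RATE_HZ * (BITS_PER_SAMPLE - 1)  # the 56k modem ceiling

print(full_rate)    # 64000
print(usable_rate)  # 56000
```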

Further, this data rate is only from the phone company to the end user,
the upload speed is at 33.6kbps or less.  The modems used are actually
connected to the digital side of the circuit, rather than on the other
end of a set of a/d and d/a converters.  The modems do some rather
impressive wire and a/d-d/a characterization in the beginning with
various noises to figure out how much of the system they can overcome,
but to really get the data at the full rate they'd have to up the power
levels a bit from the phone company's side.

As far as the other rates below 56k, they were simply what engineers
could reliably attain given a technology, cost limit, and corporate
politics of each time.  In each case except 56k they dictated the wire
speed, and any gains on compression were above that.

Of course, the 56k calculation and bit about the phone companies
throwing away the last bit for clocking purposes is out of a DSP book.
The rest is either rampant speculation on my part, or things I've read
that I now assume I deduced myself.  Read accordingly...

-Adam

James Newton, Host wrote:

{Quote hidden}


2004\01\21@003749 by Kenneth Lumia

The short answer to your question is that the other data
rates such as 12k, 14.4k, 19200 and above fall out of
different baud rates than 9600 and below.

The longer answer is that bit rates are tied to the bandwidth
used by analog modems (historical).

In older modem technology no error correction bits were
sent with the data. In these products, data rates from
2400 to 9600 used a telephone line bandwidth of 2400Hz.
This allows the following data rates to fall out:

2400 bits per second = 1 bit per baud   (2 point signal constellation)
4800 bits per second = 2 bits per baud (4 point constellation)
7200 bits per second = 3 bits per baud (8 point constellation)
9600 bits per second = 4 bits per baud (16 point constellation)
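The table above can be generated from the symbol rate: at a fixed 2400 baud, the bit rate is just bits-per-baud times 2400, and the constellation needs 2**bits points.  A quick Python sketch:

```python
# Bit rate vs. constellation size at a fixed 2400 baud, per the table
# above: rate = bits/baud * baud, constellation = 2**bits points.
BAUD = 2400

for bits in (1, 2, 3, 4):
    print(f"{bits * BAUD} bits per second = {bits} bit(s) per baud "
          f"({2 ** bits} point constellation)")
```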

At the time, due to relatively little horsepower on the processor
side, it was sufficient to simply use a slicer to determine what
the actual received data point equated to.  If the signal-to-noise
ratio was good enough, relatively few errors occurred.

As time progressed, there was a desire to increase the data rate
and the SNR performance by adding error correction bits into the
transmitted data.  Of course this required a huge increase in
processing power. (You can search on documents relating to
V.32 and trellis coding for more information.) As I recall, a
14400 bps modem used 7 bits per baud (6 bits of data and
1 error correction bit per baud).  This would result in a
constellation of 2**7 = 128 points, a greatly increased
problem to decode.
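The same arithmetic continued for the trellis-coded case just described (a sketch; the 2400 baud symbol rate is my assumption here - it is what makes the numbers in the post work out):

```python
# Trellis-coded case: 6 data bits + 1 redundancy bit per symbol.
# A 2400 baud symbol rate is assumed, matching the figures above.
BAUD = 2400
DATA_BITS = 6
ECC_BITS = 1

print(2 ** (DATA_BITS + ECC_BITS))  # 128-point constellation to decode
print(DATA_BITS * BAUD)             # 14400 bps of user data
```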

As processing power grew, higher bandwidths were used
along with additional error correction techniques up to
28,800 bps line rate. Higher data rates such as 57600
and 115K were actually done with compression (V.42bis)
and not by increasing the actual line rate.  It basically amounts
to how much horsepower and how smart we are at any
given point in time.


56K modems (line rate) work slightly differently and can't
be connected end-to-end and require a digital backbone
between them (I'm sort of fuzzy on this and it may be incorrect).

Ken



{Original Message removed}

2004\01\21@005803 by Nate Duehr
On Tuesday, Jan 20, 2004, at 20:29 America/Denver, Jake Anderson wrote:

> my understanding is that standard phone calls run at about 64kbps data
> rate.
>
> in Australia at least the carriers only guarantee the call to 9600bps
> the improvement in modem speeds has been finding better ways to
> utilise the
> line, better modulations etc. The compression and error checking runs
> on top
> of that. I've had uploads of 30k/s on a 56k modem (which only connect
> at
> 33.6k up stream btw) due to the compression the modem was doing.

Welllll... kinda.  This is going to ramble a bit, but ride along for
the fun...

Now you've also got to take into account the history of the telco side
of things.  Analog lines up through the 1970's generally used to run
all the way back to the Central Office... perhaps getting amplified and
echo-cancelled and all sorts of fun stuff along the way.  (Can you say
Bridge Taps, boys and girls?  Sure I knew you could!  Party Line,
anyone?)

Obviously this wasn't very efficient so Bell Labs came up with some
toys in the 60's to do A/D conversions and accurately cram those into a
single large synchronously-clocked signal in timeslices...
Time-Division Multiplexing was born!  That little thing they called a
"transistor" seems to have helped in this process.  (GRIN)

Early analog to digital stuff gave a standard phone call a 64Kb/s
timeslice in a larger synchronous circuit to cover the mathematical
requirements of Nyquist's Theorem and the desired "usable bandwidth" of
an analog circuit at the time.  Those A/D conversions would be stuffed
into timeslices of a faster synchronously-clocked circuit... the T1.
You'd put 24 X 64Kb/s in... synchronous, clocked to one end or the
other, all that fun stuff.

Other physical challenges awaited the early T1 makers... Framing bits
were "stuffed" in to keep early electronic line repeaters operating,
and in cases when too many 0's were sent in a row, an algorithm called
AMI (Alternate Mark Inversion) was added... this kept the line-powered
repeaters from losing power... yep, they stole power from the "AC" of
the ones and zeros coming down the line... wheee.  And a down loop was
always forced to send "All 1's" (which was actually "alternated" by AMI
so there was always enough power present to keep the repeaters that
weren't chopped out of the loop by a back-hoe... working!)  All-1's is
also known as "Blue Alarm".  Other alarm types were created... "Red
Alarm" was simply that the whole bloody thing had come out of sync and
crashed.  Yellow alarm was more interesting... bits from the audio path
were robbed and set to patterns (All 1's in the second most significant
bit of each 64 Kb/s timesliced frame) was an indication that something
was wrong at the far end, but that the circuit was still up and
synchronized.  (Usually an indication that the CO switch had dropped
ALL calls over the trunk, usually due to an unusually high
bit-error-rate.)

As clock sources got better, less of this bit stuffing needed to happen
as lines could be driven further and the line-powered repeaters
slowly were removed from the network.  So, the extra framing bits were
stolen to move the end-to-end alarm bits outside of the analog portions
of the signal... and this is Extended Super-Frame, or ESF.  Another
little trick added to keep things clocked up right, B8ZS (Bipolar
8-Zero Substitution) was introduced as the "normal" way to bit-stuff an ESF
T1 or "T-span"... span being leftover AT&T long-lines terminology for
exactly that... huge spans of cable on the early telephone long
distance network.

Anyway, enough about how early T1's worked... Telcos figured out
rapidly that end-subscribers couldn't really tell the difference in
audio quality between a full 64Kb/s A/D conversion and say a 16Kb/s
conversion... especially if they did some tweaking to the power levels
of the mid-range and lows... so as time went on, they figured out how
to multiplex more and more analog sampled stuff into a single T1...
thus saving on trunking costs between Central Offices.  Less cable in
the ground, more money in their pockets.  Good for everyone.

As clock sources and oscillators and everything got better, the
available real bandwidth of an analog phone line to the home from the
home end to the far end, actually went DOWN.

Well... as they say, timing affects the outcome of the raindance.

All of these technologies collided in the Age of Modems (GRIN) when
modem manufacturers who were counting on that "standard" analog line
being able to carry certain frequencies were also competing with (but
most of the time they didn't know it) the local telco who wanted to not
put more copper in the ground in the Outside Plant.

The telco side of this worked its way out to neighborhoods in the form
of the Channel Bank.  Instead of just saving money between Central
Offices, the telco now wanted to run only a small amount of copper wire
all the way from the CO to your neighborhood.  They realized they could
do this by taking the signal to the neighborhood digitally and doing
the A/D conversion closer to your house.

They came up with a device that would take a whole bunch of analog
residential and business circuits and cram them into as few T1's as
possible, thus saving the telco money on putting more copper into the
ground as a neighborhood grew.  Even beyond the Channel Bank, some
devices started implementing protocols like GR-303 where the device in
the "field" would route individual analog lines to the Central Office
ONLY after they had gone to an off-hook state or they had an incoming
call.  The CO Switch and this "smart device" would steal a single 64K
channel to communicate with a serial packet protocol and the smart box
could then know when a call was coming in for one of thousands of
analog lines, and would connect that line to the appropriate channel
number of digital trunk(s) to the CO.  [This means that if all the
people in your neighborhood picked up the phone at the same time,
anywhere from 30-60% of you would NOT get a dial-tone if you live in an
area serviced by one of these boxes - depending on how hard the telco
tweaked the line usage algorithm in the multiplexers and remote
switches.  Some regulatory agencies oversee this percentage and don't
let it get too out of control in the telco's favor.  Ever get a
fast-busy signal IMMEDIATELY upon picking up the phone on a busy
holiday?  You're serviced by one of these gadgets then.]

The modem engineers fought back... smart engineers started coming up
with ingenious plans to use all of that big beautiful original analog
pipe not knowing that what they really had was an analog pipe for three
city blocks crammed through a SLC-96 or similar early model Channel
Bank at which point all of their headroom outside of the standard voice
frequencies was stripped to get rid of problems with good old Mr.
Nyquist's theorem... they created faster and faster modems and the
general population bit.  They wanted speed.  But then they also started
complaining.

"I never get a 56K connection!".  Yep.  And depending on where you
live, that's generally where it stands today.  If you're lucky enough
to be close to a Central Office and have copper that runs from the
analog card in the C.O. switch all the way to your house, you're
probably in the minority these days.  Surprisingly, your chances of
having this are BETTER if you're rural as long as you're not serviced
by an ANALOG CO Switch... not many of those beasts left today, but they
were common in the Western U.S. even into the late 1980's.

Suburbanites in new neighborhoods?  Forget it.  You're getting crammed
through a mux somewhere.

Stuff like V.90 is so sensitive it not only requires the end-user not
be routed through a tight mux, it also really requires the head-end
(ISP) modem bank be fed directly with digital (usually a T1 or in large
deployments a multiplexed DS-3) so the fewest number of A/D conversions
take place.

(The spec calls for ONLY ONE A/D conversion in the ENTIRE path from
end-user to modem pool for maximum performance.  If the ISP uses "V.90
modems" and you take a look in their POP and it's an analog modem bank
with a bunch of RJ11's for connectivity... beware.  You'll never get
full-data-rate out of a connection to it.  Ever.  Not physically
possible.)

Take a breath for some air if you made it this far...

Next, you start to realize this entire telco network is synchronous and
has to be clocked very accurately, and you can see clearly where many
of the wonderful advances we all see today in oscillators, "elastic
buffers" and other fun stuff came from.

You can also see how Europe's E1 standard evolved slightly later than
the T1 and how it was less expensive for telcos in Europe to "just
start with" a broader pipe that was taking advantage of the better
clock sources available at the time.

[Don't even get me started on ISDN... great ideas, way ahead of its
time... died a slow and painful death because it was too expensive to
deploy into the old network.  Oh I do love seeing it relabeled as
"iDSL" these days, though when you're too far out for regular DSL
technology so they take an ISDN chipset and run it in a raw 144Kb/s
data mode with alarming and no signalling and call it "DSL"... heh.
Awesome marketing!]

Boy we're up to geek party time now!  Whoo hoo.  Hey!  Who robbed my
bits!

The "generic" claim from most telcos that they'll only "guarantee" 9600
bps is silly -- none of the technologies they've widely deployed have
ever really stolen so much audio quality from the line in muxing them
down that 9600 is the best the line will do.  But line noise and other
contributing factors made them all confer with their lawyers and come
up with the 9600 bps claims -- it's a Cover Your Ass(ets) type of
thing.  :-)

So the circle of life goes around, and now the telco folks patch in a
DSL DSLAM card into your somewhat beleaguered little analog line and
use up that "overhead" that was always there... all that beautiful
analog copper wire bandwidth that "no one" was using -- is again in
use.  ;-)

Throw in fun like the switch from D4 signalling on T1's to ESF (in D4,
alarm bits are stolen from the second and third most significant bits
in the audio timeslices; in ESF, the alarm bits are moved out of the audio
frames into a header and trailer frame), etc. etc. etc.  It's all very
"evolutionary", with some of the technologies in analog clashing at the
very end of the timeline of analog telco.

Imagine if you will what happens to a sensitive analog signal like a
56K modem in the early days of such modems when D4 was prevalent... you
rob bits out and set them to ones when they're supposed to be zeros and
things sound a little different on the far end... eh?  Modem doesn't
like that so much.  (Seen it in the lab... not happy at ALL.)

Luckily D4 spans are almost a thing of the past... anyone using them
for trunking anymore should be hung up by their shoelaces and given
twenty lashes with a wet noodle... unless there's some god-awful
outside plant that still has line-powered repeaters somewhere?!  EEEEEK.

Now if I could just get Qwest convinced that I don't NEED analog telco
service to have DSL services on the line, and I could switch to using
something like Vonage (http://www.vonage.com) for any analog phone line
desires I have... life would be good.  About time to fire up the pen
and scribble off a note to the local Public Utilities Commission
stating that Qwest's rule that one must have dial-tone to have DSL is
outdated and holding back proper competition in the local-dial-tone
market!  :-)

Ain't all this stuff FUN?  ;-)

Telco geek for many years turned Unix geek, but still love telco as
it's such a cool "natural" progression of technology for 30 years...

Nate Duehr, nate@natetech.com

--
http://www.piclist.com hint: To leave the PICList
piclist-unsubscribe-request@mitvma.mit.edu

2004\01\21@041930 by Win Wiencke

Thanks Nate Duehr for a wonderful rant!

People like you are what make the PIC list exceptional.

Win Wiencke


2004\01\21@072733 by Dave Tweed

"M. Adam Davis" <adampic@UBASICS.COM> wrote:
> To save on the cost of having an extra clock wire for certian portions
> of the digital transmission, the last bit of each byte is toggled,
> regardless of the actual value of that bit from the A/D conversion.
> This enables easy clock recovery for the phone company, and one less
> dedicated wire for the clocking circuit.

> Of course, the 56k calculation and bit about the phone companies
> throwing away the last bit for clocking purposes is out of a DSP book.

Unfortunately, it's dead wrong. Telcos have *never* used this method of
clocking data over digital lines at any rate.

In North America and other countries using the T1 standard, each channel
really has 64 kbps of data (8 ksps x 8 bits per sample) devoted to it.

However, older equipment using in-band signaling will "rob" (overwrite)
bit 7 once every 6 or 12 samples, in order to indicate ringing / off-hook
status at each end. A signal that passes through multiple such links in
tandem may lose bit 7 in additional samples, because tandem links do not
necessarily synchronize at the "multiframe" level, which would also
synchronize the bit-robbing. As a result, you can only assume that you've
got only 7 usable bits per sample, or 56 kbps.

Nate Duehr <nate@NATETECH.COM> wrote:
> Ain't all this stuff FUN?  ;-)
>
> Telco geek for many years turned Unix geek, but still love telco as
> it's such a cool "natural" progression of technology for 30 years...

Yes, it is. While the general concepts of your narrative were mostly
accurate, the details were wrong in almost every respect. I'm speaking
as a recent (up until 2002) designer of T1/E1 terminal multiplexer
equipment.

For example, repeaters were never "powered by the signal". The
ones density requirement is related entirely to maintaining clock
synchronization. AMI doesn't help create ones density, but bit-8
stuffing and B8ZS do.

And so on. I don't have time to address all of your points.

-- Dave Tweed


2004\01\21@084526 by Anthony Toft

Nate,

Thanks for the history technology lesson.

I was in the dial-up ISP business at the dawn of the 56k age I knew of
the limitations (one A/D conversion in the line etc) but now I know why.

Thanks again...

Anthony


2004\01\21@092612 by Dan Oelke

Wow - I'm impressed to see at least 2 other people on this list who
have a pretty good idea of how the phone stuff works.  I also have
been a designer of phone equipment - T1's for TR-8 and GR303 up to
10Gbps Sonet stuff.

Thank you to Nate for the wonderfully full description!

I disagree on the use of the AC signal for powering the repeaters, as
everything I have seen has network power of up to 130V DC on a
separate pair of wires to the repeater.

I'll also disagree with Dave's comment that it is "older" equipment that
robs the least significant bit on every 64Kbps line.  GR303 is
pretty much as modern as you can get for digital lines out towards the
subscriber, and it uses ESF, which robs 1 least significant bit out of
every 6 frames.  Granted GR-303 has been in use for 15+ years (can't
find my copy right now) but there isn't anything newer.  He is right
though - a lot of equipment is not multi-framed aligned - partially
because trying to do it would add a lot of latency to the connection,
and partially because equipment manufacturers sometimes take the easy
way out and just don't worry about it.

Those "disagreements" about some pretty trivial matters aside thank you
to both of you for some pretty wonderful write-ups.

Dan

Dave Tweed wrote:

{Quote hidden}


2004\01\21@130808 by Robert Mash

Thanks!
That was very informative.
Bob Mash


----- Original Message -----
From: "Nate Duehr" <nate@NATETECH.COM>
To: <PICLIST@MITVMA.MIT.EDU>
Sent: Wednesday, January 21, 2004 12:57 AM
Subject: Re: [EE] Why don't baud rates just double?


{Quote hidden}


2004\01\21@143958 by Dwayne Reid

At 05:17 PM 1/20/2004, James Newton, Host wrote:
>Why is each higher baud rate twice as fast until 14.4K which is only 1.5
>times faster than 9600?
>
>115,200
>  57,600
>  28,800
>  14,400
>     ?
>   9,600
>   4,800
>   2,400
>   1,200
>     600
>     300

You forgot 75 & 150 <grin>.  Also 19200 & 38400.

You sparked an interesting discussion but I *think* that you are actually
talking about 2 different issues: standard baud rates that a UART can use,
and connect speeds that a modem might use.  The UART issue is simple -
standard baud rate generators as used in most PCs only have a certain
number of divisors that can be used when dividing the crystal rate down to
the clock input of the UART.

I could be wrong on the following (probably am - but hopefully someone else
can correct my mistakes) - I *think* that the 2 'odd-ball' baud rates of
57600 & 115200 come from using a divide by 3 tap in the baud rate generator
instead of only using powers of 2.

Others have covered the reasons for the different modem connect speeds over
TELCO lines.  One thing to remember: the modem communicates to the PC using
one of the standard baud rates only: 300, 600, 1200, 2400, 4800, 9600,
19200, 38400, 57600, 115200.
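For the curious: the classic PC UART (the 8250/16550 family) feeds a 1.8432 MHz crystal through a fixed divide-by-16 prescaler and then a programmable integer divisor latch, so every standard rate is 115200 divided by a small integer - which would account for 57600 and 115200 without needing a divide-by-3 tap.  A sketch:

```python
# PC-style UART baud generation: 1.8432 MHz crystal / 16 = 115200 max,
# then an integer "divisor latch" value selects the actual rate.
CRYSTAL_HZ = 1843200
BASE = CRYSTAL_HZ // 16  # 115200

for divisor in (1, 2, 3, 6, 12, 24, 48, 96, 192, 384):
    print(f"divisor {divisor:3d} -> {BASE // divisor:6d} baud")
```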

dwayne

--
Dwayne Reid   <dwayner@planet.eon.net>
Trinity Electronics Systems Ltd    Edmonton, AB, CANADA
(780) 489-3199 voice          (780) 487-6397 fax

Celebrating 19 years of Engineering Innovation (1984 - 2003)
 .-.   .-.   .-.   .-.   .-.   .-.   .-.   .-.   .-.   .-
    `-'   `-'   `-'   `-'   `-'   `-'   `-'   `-'   `-'
Do NOT send unsolicited commercial email to this email address.
This message neither grants consent to receive unsolicited
commercial email nor is intended to solicit commercial email.


2004\01\21@154603 by Nate Duehr

Dave Tweed wrote:

{Quote hidden}

Well, I had a mentor who claimed to have "been there and seen it", I'm
not old enough to remember the 60's.  He had rather true-sounding
stories of having to jump start such devices with batteries, after some
span failures.  His recollection of this stuff is clear back to early
Bell Labs tests of these types of circuits, but I can't verify his
accounts personally in any way.  I do understand your point also, and
it's been a long long time since I had to deal with T1's on a daily
basis, so memory is a bit fuzzy.

>And so on. I don't have time to address all of your points.
>
>
Well here's hoping it at least gives a "feel" for the general evolution
of the technology.  I certainly had hoped it was more accurate than I
guess it was from old grey-matter, and I'm operating on about 8 hours of
sleep in 72 hours total right now, so maybe during a more awake state
I'd have explained things better.  Generally the intent was to remind
folks that there's a *whole lot* more going on when dealing with modems
and analog lines between the wall jack and the CO than meets the eye in
most modern neighborhoods.

I really would be interested in seeing how your more practiced
professional descriptions would go from my rambling rant of a (I guess,
bad...) history lesson.  If nothing else for posterity's sake so the
PICList archives have "the Right Stuff" in them.  ;-)

Oh well... back to banging head on keyboard wondering why HP-UX has to
do things differently than most Unix's just to confuse the sysadmin...

Nate Duehr, nate@natetech.com

p.s. I hope I've figured out how to turn off HTML mail in the new
Mozilla Thunderbird I'm playing with here... the developers in their
infinite wisdom also appear to have moved all of the usual controls for
that from places they've been since Netscape 4.x ... user-hostile
interface design, indeed!


2004\01\21@205438 by Lee Jones

> Thank you to Nate for the wonderfully full description!

> I disagree on the uses of the AC signal for powering the repeaters,
> as everything I have see has network power of up to 130V DC power
> on a separate pair of wires to the repeater.

I've also always seen the repeaters powered by 130-200VDC on the
wire.  This is partly why telco linemen will actually go to the
trouble of "protectoring" the pairs (putting the red plastic covers
on the punch-down positions) -- it does bad things to your test set
if you are looking for dialtone and clip onto a T1.  I also try to
not ground myself (i.e. leaning on a pipe) while probing a frame.

>>> Of course, the 56k calculation and bit about the phone companies
>>> throwing away the last bit for clocking purposes is out of a DSP book.

>> Unfortunately, it's dead wrong. Telcos have *never* used this method of
>> clocking data over digital lines at any rate.
>>
>> In North America and other countries using the T1 standard, each channel
>> really has 64 kbps of data (8 ksps x 8 bits per sample) devoted to it.

And such a 64kbps channel is a DS0 standard.  24 DS0s multiplexed
together make 1 DS1 (which is commonly carried on a T1 physical
interface).  [And 30 DS0s make a European E1 capacity circuit.]
28 DS1s make a DS3 which is usually carried on copper as a T3.

T1 speed of 1,544,000 bits/second is 24 * 64,000bps + 1 * 8000 bps
administrative overhead.
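That T1 figure checks out as a one-liner (standard numbers: 24 DS0s plus one framing bit per each of the 8000 frames per second):

```python
# T1 line rate: 24 channels x 64 kbps, plus 8 kbps of framing overhead
# (one framing bit per 8000-per-second frame).
DS0_BPS = 64000
CHANNELS = 24
FRAMING_BPS = 8000

print(CHANNELS * DS0_BPS + FRAMING_BPS)  # 1544000
```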

As Dave said, any digital equipment using a channel can only rely on
having 7 bits per sample (x 8000 samples/second = 56kbps) because of
low order bits "robbed" for in-band signalling.  It made no difference
when the telco used the digitized channels to carry voice -- people
just couldn't notice it.

If you need all 8 bits, you need to ensure your contract specifies
that the channel(s) is 8-bit clear (or similar).  It can be done,
but the telco frequently won't guarantee such service for free.

DS0s get reused a lot.  For example, ISDN uses DS0 for each of the
two B (bearer) channels plus 16,000bps for the D (data, signalling)
channel.  Thus ISDN's 144,000 bps raw on-wire data rate.


And since I've bothered to speak up...

{Quote hidden}

You left out 19,200 and 38,400.  Both were commonly used on serial
links long before modems could go anywhere near that fast.  These
are even multiples of 9,600.

And I don't recall 14,400 or 28,800 as being serial link speeds.
They were modem speeds and had to do with the number of bits per
second that could be packed into the number of signal states per
second (i.e. baud) that were available on that link.  Telecomm life
really was simpler when 1 baud == 1 bit/second. :-)

Running the serial link faster than the modem rate became common
once sufficient compute power and buffer memory was available in the
modem to do transparent data compression "on the fly" -- even before
modem speeds exceeded 2400bps.  I recall Micom's MNP protocols to
do inter-modem compression.

Another old reason to run the serial link faster than the line speed
was synchronous modems.  These ran on dedicated 1- or 2-pair analog
circuits and cost about $1 per bit/second (per modem).  Yes, boys &
girls, that was a couple thousand dollars per 2400 bps modem.

Synchronous modems usually used 8 bits per character (width varied,
but I'll assume 8 to simplify this discussion).  The modem provided
separate data & clock signals on its serial interface.  The
concentrator handled framing, error correction, and (given the era)
was usually built with many PC boards stuffed with TTL.

The serial link from the concentrator/controller to end devices was
asynchronous at 10 bits/char.  If you ran the async and sync sides
at the same rate -- say 2400bps -- then the sync side had 300 char
per sec while the async side could only transfer 240 char/sec.  At
this time, buffer memory was very expensive.  This is why the link
protocols also provided flow control (usually as a side benefit).
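The 300-vs-240 characters/second figures above can be checked directly (a minimal sketch, plain Python):

```python
def chars_per_second(link_bps: int, bits_per_char: int) -> float:
    """Character throughput of a link for a given framing width."""
    return link_bps / bits_per_char

# Synchronous side: 8 bits per character
print(chars_per_second(2_400, 8))   # 300.0

# Asynchronous side: start bit + 8 data bits + stop bit = 10 bits/char
print(chars_per_second(2_400, 10))  # 240.0
```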

                                               Lee Jones

--
http://www.piclist.com hint: To leave the PICList
piclist-unsubscribe-request@mitvma.mit.edu

2004\01\21@210629 by Jack Smith

>And such a 64kbps channel is a DS0 standard.  24 DS0s multiplexed
>together make 1 DS1 (which is commonly carried on a T1 physical
>interface).  [And 30 DS0s make a European E1 capacity circuit.]
>28 DS1s make a DS3, which is usually carried on copper as a T3.


Isn't it 32 DS0 streams for an E1 (2.048Mb/s), with one or two (there is
some variation from country to country) of the DS0s reserved for signaling?

As a user of a rented E1, you can get 30 slots, and in some countries 31
slots.

Jack Smith


2004\01\22@023136 by Lee Jones

>> And such a 64kbps channel is a DS0 standard.  24 DS0s multiplexed
>> together make 1 DS1 (which is commonly carried on a T1 physical
>> interface).  [And 30 DS0s make a European E1 capacity circuit.]
>> 28 DS1s make a DS3, which is usually carried on copper as a T3.

> Isn't it 32 DS0 streams for an E1 (2.048Mb/s), with one or two
> (there is some variation from country to country) of the DS0s
> reserved for signaling?  As a user of a rented E1, you can get
> 30 slots, and in some countries 31 slots.

My mistake; I believe you are correct that an E1 is 2.048Mbps and
would be 32 x DS0.  Thanks for correcting my error.
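The corrected E1 arithmetic works out as follows (plain Python; the two reserved slots assume the common layout of one framing slot plus one signalling slot, per the question above):

```python
DS0_BPS = 64_000
E1_SLOTS = 32
E1_BPS = E1_SLOTS * DS0_BPS

print(E1_BPS)        # 2048000

# Usable slots when one slot carries framing and one carries signalling:
print(E1_SLOTS - 2)  # 30
```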

                                               Lee Jones

--
http://www.piclist.com hint: PICList Posts must start with ONE topic:
[PIC]:,[SX]:,[AVR]: ->uP ONLY! [EE]:,[OT]: ->Other [BUY]:,[AD]: ->Ads

2004\01\22@114440 by Dipperstein, Michael

> > I disagree on the uses of the AC signal for powering the repeaters,
> > as everything I have seen has network power of up to 130V DC
> > on a separate pair of wires to the repeater.
>
> I've also always seen the repeaters powered by 130-200VDC on the
> wire.  This is partly why telco linemen will actually go to the
> trouble of "protectoring" the pairs (putting the red plastic covers
> on the punch-down positions) -- it does bad things to your test set
> if you are looking for dialtone and clip onto a T1.  I also try to
> not ground myself (i.e. leaning on a pipe) while probing a frame.

Without trying to sound too much like a commercial, I've seen ISDN repeaters
that run off of 210V DC nominal; I'm sure the max is higher.  130V DC nominal is all
over the place.  There's even a Bellcore (Telcordia) standard for the acceptable
range of voltages used to power 130V DC devices.

There's at least one company that has a line Multiplexer that requires 340V DC
(maybe it's 320V DC).  I don't think this device is currently deployed in North
America, but one of the major phone companies was considering it and wanted to
be sure that our test sets didn't get damaged by it.

Most modern test sets are designed so that they can handle these high voltages.
Our latest models recognize the voltages (as well as T1 data), and will alarm
instead of drawing dialtone.  Other manufacturers have similar protections in
their test sets.

-Mike

