Just one small note before I begin: I have tried several times to get Blogger/Blogspot to make hyperlinked footnotes for me (using the HTML override editor), without much success. If I leave the fragment identifier alone and accept Blogger's "correction" of my hyperlink to the same document, clicking on it tries to bring the reader to an edit page for this post (and of course no one but myself can edit, so it errors). If I remove everything but the fragment identifier, as many Web pages on the footnotes topic suggest, for some reason my browser won't follow the links. I apologize for the inconvenience of needing to scroll to near the end of the post in order to read the footnotes.
Myth: Netflix (or any other streaming content provider) is being throttled (or blocked) to make incumbents' video services seem attractive in comparison to streaming.
Truth: No, they are not being throttled per se. So passing some sort of neutrality regulation or law likely won't help much, if at all.
When people talk about "network throttling" (also called "traffic shaping"), they are talking about classifying network traffic, either by protocol (such as TCP plus a port number) or by source or destination IP address, and programming one or more intervening routers to allow only some limited count of packets to pass through in some given amount of time (e.g., packets amounting to 200 kilobytes per second). The source (e.g., Netflix) compensates for this apparent lack of bandwidth by slowing down the rate at which it transmits those packets. The viewing quality is thus downgraded by switching to a more highly compressed version of the video, and the viewer might also see stutters and stops for rebuffering.
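To make the mechanism concrete, here is a minimal sketch (in Python, with made-up numbers) of the token-bucket style rate limiting a shaping router might apply to a classified flow. Real routers do this in the forwarding path, usually in hardware; this is only an illustration of the idea.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows `rate_bps` bytes per
    second through, with bursts of up to `capacity` bytes."""
    def __init__(self, rate_bps, capacity):
        self.rate = rate_bps        # refill rate, bytes per second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # drop (or queue) it: the flow is shaped to ~rate

# A 200 kilobyte-per-second policy, as in the example above:
shaper = TokenBucket(rate_bps=200_000, capacity=200_000)
```

A shaped flow sees exactly what the paragraph above describes: packets beyond the budget are dropped or delayed, the sender backs off, and the effective bandwidth settles near the configured rate.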
While a certain amount of this may be being done by some ISPs, current thinking is that it is unlikely. What is more likely happening is that the incumbent ISPs are unwilling to increase interconnect capacity. If the current interconnects between Netflix's ISP(s) and the incumbents are run up to capacity, naturally no more streams of the same bitrate can be passed through. Everything must be slowed down.
The analogy of the Internet to pipes/plumbing is not too bad. If some pipe is engineered to deliver 600 liters per minute at 500 kilopascals, you can't just increase the pressure to try to stuff more water per minute through; the pipe will burst and no longer be usable. One solution would be to install a similar-capacity pipe in parallel with the original, increasing the effective cross-sectional area of pipe, which would be analogous to installing another (fiber) link between routers. The other way, of course, is to increase the pipe diameter, which would correspond to using existing media such as fiber in a different way (such as using more simultaneous laser wavelengths or something).
As implied, incumbents such as Comcast, Time Warner (TWC), Cox, etc. have economic disincentives to upgrade their peering capacities with services which compete with their video offerings. So all they have to do is let existing links saturate with traffic, and it appears to consumers as if their streaming provider is being throttled. Those consumers are of course in no position to investigate whether their streaming service (e.g., Netflix) is being traffic shaped or whether peering points are being allowed to saturate. Only their incumbent's network engineers have the access to the equipment and the know-how to answer that question accurately.
I personally cannot envision regulation which would be able to discriminate between administrative traffic shaping and interconnect saturation. In my libertarian opinion, any attempt to do so would amount to a government takeover of Internet service, because it would mandate how ISPs must allocate resources to run their core business. With government takeover would come government control, and that could quickly be used to suppress essential freedoms such as those expressed in Amendment I.
Myth: The passage of 'Net Neutrality would lead to a government takeover of the Internet where connection to the 'Net will require a license.
Truth: Take off the tin foil hat, that is simply preposterous. There is no one (in government) I know of who is even proposing such a thing: not FCC commissioners, not legislators; no one.
This one seems to be a favorite fear mongering tactic of Glenn Beck. Oh, believe me, I like Glenn (plus Pat and "Stu"), and agree with a lot of their views. But this one is so far out in Yankee Stadium left field that they'd be in the Atlantic Ocean. I do understand the rough route by which this conclusion could be reached.
The FCC required licensing of radio stations for very sound technical reasons. You simply cannot rationally allow anyone to transmit on any frequency with any amount of power they can afford. Glenn has said that at one time radio stations were akin to that, with people broadcasting to their neighborhoods. That very quickly becomes untenable, with the next person claiming the freedom to do as they please with the available spectrum and trying to overpower their competition. It's simply not within the laws of radio physics to operate that way. There needs to be some central authority to say: you, you over there, you may transmit on this frequency, with this modulation, this bandwidth (basically, modulating frequency or FM deviation), at this power, and in exchange you can be reasonably guaranteed clear transmission of your content to this geographic region.
A weak analogy would be vehicular traffic. You simply cannot drive wherever you want, whenever you want, at whatever speed you want. You must drive on either your own property or established rights of way (roads, basically), yield to other traffic pursuant to law, and at speeds deemed safe by authorities (guided by engineers who designed the roadways). You also must prove competency with these precepts, plus physical ability, thus requiring licensure.
I didn't think I would hear it from radio veteran Rush Limbaugh, but as I was writing this (over a few days), I also heard him propose that the FCC might require licenses for Web sites to come online, comparing it to radio stations needing licensing. His thesis is that radio stations must prove to the FCC that they're serving the community in which they're broadcasting. That's true. Why would the FCC grant exclusive use of a channel1 to a station which wasn't broadcasting content in which the community was interested? It cannot allocate that channel to more than one transmitting entity.
Even reclassifying ISPs under Title II will do no such thing. Telephone companies have been regulated under Title II for a very long time, and have you ever heard of ANYONE suggesting a license is needed to connect to the phone network? NO, NEVER. Yes, it is true that for a long, long time the Bell System insisted it was technically necessary to have only its own equipment connected to the network. This eventually gave way to type acceptance/certification.2 Still, it's an open, nonprejudicial process. If for some oddball reason the FCC suddenly decided to synthesize some objection to a device being attached to phone lines in an effort to interfere in a company's commerce, the manufacturer could sue in court and plead its case that the device does indeed meet all technical requirements and that the FCC is merely being arbitrary. But the point is, this is not licensing individual users, only the manufacturers, and purely for technical reasons.
Myth: The phone company (or companies) has/have not made any substantial improvements (except maybe Touch-Tone) in almost a century (since the Bell System).
Truth: There have been plenty of innovations during the "reign of," and since, the Bell System.
Times used to be that trunking between cities had a one-to-one correspondence between cable pairs and conversations. Back in 1920 or so, if I had wanted to talk from here in Cheektowaga, NY to my sister in Pasadena, TX, a pair of wires along the complete path, from phone switch to phone switch along the way, would have had to be reserved for the duration of the conversation.
The first innovation was frequency division multiplexing (FDM). Phone conversations are limited to approximately 300 Hz to 3000 Hz response, which is quite adequate for speech. What you could do is carry several conversations on modulated carriers, much like radio does, but closer to the audio spectrum, for example at 10 KHz, 20 KHz, 30 KHz, etc. (I'm not sure of the exact frequencies used).
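To illustrate the idea, here is a toy sketch of an FDM carrier plan. The 10 KHz base and spacing are the hypothetical figures from above, not the real Bell carrier plan.

```python
def fdm_carriers(n_conversations, base_khz=10, spacing_khz=10):
    """Hypothetical FDM plan: each voice channel rides its own carrier,
    spaced far enough apart that the ~3 kHz voice bands don't overlap."""
    return [base_khz + i * spacing_khz for i in range(n_conversations)]

# Four conversations share one wire pair on carriers at 10, 20, 30, 40 KHz:
# fdm_carriers(4) == [10, 20, 30, 40]
```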
Starting in the 1960s, after solid state computer technology was well developed, the interconnections (trunking) between switching centers were upgraded from analog to digital. This is how the modern public switched telephone network (PSTN) works. Instead of transmitting voice or FDM voice on the links, the voice is digitized (analog to digital conversion, or ADC) and sent down the trunk one sample at a time, interleaving several conversations digitally (time division multiplexing, or TDM).
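A toy illustration of that interleaving: real TDM trunks such as a T1 do this 8,000 times a second in hardware, with framing bits this sketch omits.

```python
def tdm_frames(conversations):
    """Interleave one sample per conversation into successive frames,
    the way a TDM trunk assigns each conversation a timeslot per frame."""
    # Each conversation is a list of 8-bit samples (one per sampling tick).
    return list(zip(*conversations))

# Three digitized conversations, four samples each:
a = [0x11, 0x12, 0x13, 0x14]
b = [0x21, 0x22, 0x23, 0x24]
c = [0x31, 0x32, 0x33, 0x34]

frames = tdm_frames([a, b, c])
# Each frame carries one timeslot per conversation:
# frames[0] == (0x11, 0x21, 0x31)
```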
Digitizing the phone system also had the benefit of getting rid of mechanical switching. It used to be that when you dialed the phone, while the dial was returning to its "home" position, it would send a pulse down the line for each number it passed. So for example a 1 sent one pulse, a 2 sent two pulses, and so on, with the exception of 0, which sent ten pulses. Every pulse would physically advance (step) a rotary switch, and the timeout between digits would pass the pulsing along to the next rotary switch. See Strowger switch and crossbar switch. With the advent of TDM, each subscriber could have a timeslot in their local switch, and each current conversation between switches (over trunks) could have a timeslot on the trunk. Instead of physically carrying voice around a switch, it was a matter of transferring data instead.
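The digit-to-pulse mapping above is simple enough to sketch:

```python
def dial_pulses(digit):
    """Rotary-dial pulse count: digits 1-9 send that many pulses; 0 sends ten."""
    if not 0 <= digit <= 9:
        raise ValueError("a rotary dial has digits 0-9 only")
    return 10 if digit == 0 else digit

# Pulse trains sent while dialing 9, then 0, then 2:
pulse_train = [dial_pulses(d) for d in (9, 0, 2)]   # [9, 10, 2]
```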
And then of course there was the aforementioned Touch-Tone (a trademark for the dual-tone multi-frequency (DTMF) system). This meant that instead of pulses stepping a mechanical switch of some kind, the subscriber's telephone would generate carefully engineered pairs of tones, which the central office (CO) switch would detect, using a computer program to collect the digits and then figure out and allocate a route to the destination subscriber.
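The DTMF tone pairs themselves are standardized: each key is the sum of one low-group (row) tone and one high-group (column) tone. A sketch of the lookup a tone generator or detector works from:

```python
# Standard DTMF frequency groups (Hz) and the 4x4 keypad layout.
ROWS = (697, 770, 852, 941)        # low-group tones, one per keypad row
COLS = (1209, 1336, 1477, 1633)    # high-group tones, one per keypad column
KEYS = ("123A", "456B", "789C", "*0#D")

def dtmf_tones(key):
    """Return the (low, high) frequency pair the phone generates for a key."""
    for r, row in enumerate(KEYS):
        if key in row:
            return ROWS[r], COLS[row.index(key)]
    raise ValueError(f"not a DTMF key: {key}")

# Pressing '5' sends 770 Hz and 1336 Hz simultaneously:
# dtmf_tones('5') == (770, 1336)
```

The frequencies were deliberately chosen so no tone is a harmonic of another, which makes the pairs easy for the CO to detect reliably even on noisy lines.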
Of course, then there was the Bell System breakup in the 1980s. The effect of this was increased competition and therefore falling prices. I would think anyone old enough (I was born in 1965) remembers when it used to be quite expensive to call long distance, even as short a distance as here to Niagara Falls. Instead of a local call where someone could talk for as long as they'd like for one flat fee per month, it was charged on a per-minute basis, and the per-minute price varied based on load/demand, with daytime, evening, and overnight rates. So every once in a while, one would stay up late (past 11PM local time) to call the relatives who moved away (in my case, siblings who went to Texas, and a grandmother who lived in Closter, New Jersey). These days, due to a couple of decades of competition, long distance is included with most (although not all) flat rate priced telephone service.
Let's also not forget the development of mobile telephony. At first, it was only the very rich who had phones installed in their vehicles. Then a few decades later came the much more affordable pack phones, which I used to sell in the late eighties when I worked at Radio Shack. After that came the famous bricks. Then the mobile phone system itself was digitized, with CDMA and GSM. Then some of the CDMA and GSM bandwidth was dedicated to carrying arbitrary data, not just voice. And these days we have the next evolution in mobile communications, LTE.
So don't try to tell me there hasn't been innovation and improvement in decades. To a certain extent, even with the lack of competition before the 1980s breakup, improvements were slowly but surely made. Don't get me wrong, money was a large motivating factor (e.g., we can't sell our long distance service if we don't have the capacity to carry it, so we have to innovate in how we carry those conversations around). But the key is that competition since the breakup greatly accelerated innovation, because it was no longer a matter of a monopoly getting around to it whenever the company felt like it; they had to innovate faster than the competition or lose customers.
Myth: Netflix (or other streaming content providers) are being extorted into paying more in order to have a "fast lane" to their customers by the likes of Comcast.
Truth: Settlement-free peering does not apply; Netflix or their ISP is a customer and not a peer. Nor should they be afforded free colocation. 'Net Neutrality should have no effect whatsoever on these private business relationships.
There is a concept of "settlement-free peering" in the ISP business. The current business model of ISPs is to charge to move octets from their customers either to another one of their customers or to one of their peers which services the customer on the other end of the communication. This is important and key; they charge for moving packets. Let's take two well-known ISPs to illustrate this, Verizon and Comcast. Hypothetically, Verizon could charge Comcast for all bytes delivered from Verizon's customers to Comcast, and for all bytes delivered from Comcast to Verizon's customers. But the same could equally be said of Comcast; Comcast could charge for all bytes moved to/from Verizon to/from Comcast's customers. It would be kind of silly to simply pass money back and forth, though, so it makes more economic and accounting sense simply to say: if you accept roughly the same amount of bytes as you give us, we'll call it a wash, and we'll consider each other peers because of this.
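A sketch of that peering logic follows. The 2:1 threshold here is a made-up example for illustration; real peering agreements set their own acceptable ratios.

```python
def peering_status(bytes_out, bytes_in, max_ratio=2.0):
    """Classify an interconnect under a hypothetical settlement-free policy:
    if traffic in each direction is within max_ratio of the other, call it
    a wash; otherwise the heavier sender is effectively a customer and pays."""
    ratio = max(bytes_out, bytes_in) / min(bytes_out, bytes_in)
    return "settlement-free peer" if ratio <= max_ratio else "paying customer"

# Roughly balanced traffic stays settlement-free:
# peering_status(10_000, 9_000)   -> "settlement-free peer"
# A streaming-heavy network sending 20x what it receives pays:
# peering_status(200_000, 10_000) -> "paying customer"
```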
Poor Cogent. They, at least at one time, had the unenviable position of being the ISP to Netflix. At the time Netflix was just starting up, Cogent likely had all sorts of customers which would be sucking down bytes from the Internet, roughly in proportion to the bytes being sucked out of their customers over to their peers. But at some point Netflix started becoming a larger and larger proportion of Cogent's outbound traffic, eventually dwarfing the inbound quantity. The tenet upon which settlement-free peering rests, that you give as much as you get, increasingly no longer applied. So, at least according to generally accepted practice, Cogent (or indeed any other 2nd or lower tier ISP) became a customer to their "peers" instead of a true peer. Rightfully so, at least according to common practice, Cogent, and I suppose later Netflix themselves, had to begin paying for moving those bytes because they were no longer really peers.
Now what do you suppose 'Net Neutrality would do if it mandates that there is to be no "paying extra for a fast lane to the Internet"? I contend that would essentially abolish settlement-free peering, and regardless of relative traffic, every ISP would have to charge every other ISP with which it peers for traffic. This would incrementally increase the costs of doing business, because now there is that much more accounts payable and accounts receivable, with the attendant hassles of nonpayment, late payment, and other miscellaneous wrangling. And ISPs aren't going to "take it lying down"; they'll just increase rates to their customers to cover these additional costs.
OK, you say, then you don't have to deliver the content through ever more saturated wide-area network (WAN) links: simply establish a content distribution network (CDN), whereby you plop down servers with your content in the network operation centers (NOCs) of all the major ISPs. Putting up and maintaining WAN connections is more costly than providing some ports on an Ethernet switch, plus that Ethernet switch is going to be tons faster than any WAN link, so it's a win for the ISPs and the ISPs' customers, and consequently for the content provider's customers too. This could also be called a colocation (colo) arrangement. The content provider (e.g., Netflix) transmits the necessary content to its CDN servers, potentially over the WAN, once, and that content is accessed thousands or maybe even millions of times, thus in effect multiplying the "capacity" of that WAN link.
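A back-of-the-envelope sketch of that multiplying effect, with made-up numbers:

```python
def wan_bytes(object_gb, requests):
    """Compare WAN traffic with and without a colocated CDN cache:
    without it, every request crosses the WAN; with it, the object
    crosses once and is then served locally from the cache."""
    without_cdn = object_gb * requests
    with_cdn = object_gb          # one cache fill, then local hits
    return without_cdn, with_cdn

# A 2 GB title streamed by 10,000 subscribers of one ISP:
# wan_bytes(2, 10_000) == (20_000, 2)
# i.e., 20,000 GB over the WAN without the CDN node, 2 GB with it.
```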
But the truth is, although providing Ethernet ports is lots cheaper, it certainly isn't free; the ISP's engineers must install such a switch, upgrade it as necessary, help troubleshoot problems which arise, and so on...definitely not anywhere near zero cost. Plus there is power, cooling/environmentals, isolation (wouldn't want a fault in the CDN server to take down the ISP's network), security both for the network and for physical access (for repairs to the CDN servers when necessary), and all the various and sundry things which must be done for a colocation. Again, this is not just some minor expense which could be classified as miscellaneous overhead. These are real and substantial costs.
So no matter how you slice it, Netflix and their ilk must pay the ISPs. If they choose, they can be a customer with ordinary, conventional WAN links, or they could also opt for colo. Either way, they owe. It's not extortion in the least. They pay for the services they receive, one way or another. They might try to frame their plight some other way, but anyone can research what's happening and draw conclusions independently. I don't think they're being extorted into paying "extra" at all. As long as they're being charged the same as any other ISP customer to move the same approximate quantity of bytes, or the same as any other colo customer, it's totally fair.
I have to ask, why is it the only company I heard of or read of making a "stink" like this is Netflix? Why not Google because of YouTube? Why not Hulu? Why not Amazon for their Instant Video product? I suspect Netflix is looking for a P.R. angle, and want to lobby the public to get angry at their ISP for their current business practices as outlined in this section.
Myth: 'Net Neutrality is not needed because of competition. If you don't like your ISP, you can just switch. Problem solved; the other ISP should have more favorable policies or prices because they're competing for customers.
Truth: It is true that no new regulation is needed for what I'll term the Internet core. Competition there is fine. The last kilometer (or last mile if you must), or the edge, is the problem area however.
The FCC's reclassification of ISPs under Title II of the Communications Act of 1934, from an information service to a telecommunications service, on its surface makes at least some sense. When the FCC proposed some 'Net Neutrality rules, ISPs took them to court (and won), saying the regulations could not be binding because the FCC lacked the authority to impose them. However, if ISPs can be successfully reclassified as common carriers under Title II, that would give the FCC (or at least the federal government) the authority to impose such regulations. It would also absolve ISPs of certain liabilities due to their common carrier status. (As an example, if two thieves coordinate their efforts via phone calls, the phone company is not an accessory to the theft simply because its network was used to facilitate the theft; some other complicity would need to be demonstrated.) But there are a couple of disturbing things about this.
First is the track record of the federal government. As this essay from the Cato Institute (PDF) relates:
The telephone monopoly, however, has been anything but natural. Overlooked in the textbooks is the extent to which federal and state governmental actions throughout this century helped build the AT&T or “Bell system” monopoly. As Robert Crandall (1991: 41) noted, “Despite the popular belief that the telephone network is a natural monopoly, the AT&T monopoly survived until the 1980s not because of its naturalness but because of overt government policy.”

And a little further on, there is this other notable passage:
After seventeen years of monopoly, the United States had a limited telephone system of 270,000 phones concentrated in the centers of the cities, with service generally unavailable in the outlying areas. After thirteen years of competition, the United States had an extensive system of six million telephones, almost evenly divided between Bell and the independents, with service available practically anywhere in the country.

The monopoly referenced is the one created by Bell's patents in the mid to late 19th Century. I'm not against patents; inventors and innovators need the assurance that their work, for some limited amount of time, will provide a return on the investment made. (The current patent system is in need of overhaul though. Some of its applications, even those granted, are ridiculous.) Once again, it is proven that competition improves service and lowers prices. The government, though, seems to have been in the Bell System's pocket, squeezing out competitors and strengthening the monopoly. The AT&T folks convinced regulators that their size and expertise made them ideal to establish universal phone service, and that needing to accommodate competitors' systems would only slow them down and duplicate effort; that having one, unified system would be best. They falsely argued telephone service was a natural monopoly, despite evidence quite to the contrary in the late 19th and early 20th Centuries.
Second, the FCC claims it will exercise discretion in which parts of Title II it will bring to bear on regulation. Ummmm....sure....if you like your ISP, you can keep your ISP. These new regulations will save most folks $2500 per year on their ISP service fees. There's nothing to stop the FCC from using every last sentence of it, and Title II has all sorts of rate-setting language in it. Much as I don't want to spend more than I really have to, I don't trust the government to turn the right knobs in the right ways to lower my ISP bill. Look at what happened to banking regulation under Dodd-Frank. Sure, banks complied with all the provisions, but because that cut into their profits, they simply began charging new fees, or raising ones not covered by the new regulations.
I guess it also goes without saying that anytime government inserts itself to control something else, there is ample potential for abuse of that power, such that, yet again, essential liberties are curtailed.
Yes, on occasion, regulation has served some public good. If it weren't for the 1980s divestiture, we'd probably still be paying high, per-minute long distance rates, with differing peak and off-peak rates. But the government propped up the telco monopoly for decades, and has attempted to compensate for its missteps. For example, the Telecommunications Act of 1996 amended the 1934 Act in several ways, probably the most apropos to this discussion being that it forced telcos to share their infrastructure (the "unbundled network elements") so that competing companies could lease CO space for their switches and lease the lines to customers...thus creating competitive local exchange carriers (CLECs).
On the flip side of that, though, just as video providers who also have roles as ISPs have an economic disincentive to carry Internet video streams unencumbered, so too the incumbent local exchange carriers (ILECs) would seem to have a disincentive to help CLECs. If some CLEC customer's loop needs repair, from an operational/managerial standpoint, why should it come before their own customers' repair needs? I can't say for sure, but it did not seem like Verizon was trying its best to help Northpoint when I had Telocity DSL. Sure, I would have a few weeks or a few days of good service, but then I would have trouble maintaining sync for many minutes or sometimes hours. Wouldn't you know? After Telocity and Northpoint went out of business and I transitioned to Verizon Online DSL, things were pretty much rock solid. (OK, maybe the comparison isn't quite exactly equal. Telocity was 768K SDSL, whereas Verizon was 640/90 G.dmt ADSL.)
One thing I distinctly heard Pat Gray and Steve "Stu" Burguiere say when Glenn was out is that if you don't like your ISP, you can just switch. The reality, though, is that in the same price range, people usually have only two "last kilometer" choices: the telephone company (telco) or the cable TV company (cableco). And for some, it's down to one choice. I wouldn't count cellular Internet access, because no one offers a flat rate, so-called unlimited plan. The industry norm is around 5 GB for $100/month, which is more than I'm paying ($58) for 15/1 Mbps but unlimited. Someone could chew through 5 GB in only a few hours of HD streaming video viewing, with huge overage fees after that. Satellite likewise is most decidedly not competition either. It's at least twice as expensive for the same bitrates, is limited by a fair access policy (you're throttled after so many minutes of full usage), and has latency which makes several Internet applications (VOIP for example) impractical to impossible to use. There are a few wireless providers (WISPs), but again their price points are very high compared to DOCSIS, DSL, or fiber, and they mostly serve only rural areas underserved by either a cableco or telco. No other technology is widely available. Put very simply, there are one, maybe two economically viable choices: an oligopoly. Sorry, Pat and Stu, you're way off base here. It's not like drugs, where I can go to the pharmacy at K-Mart, or at Walmart, or at Walgreens, or at RiteAid, or at CVS, or at Tops, or at Wegman's, or... That has a sufficient number of competitors to really be called "competition."
Competition (lack of an oligopoly) is essential to improving service and lowering prices. Look at what happened in the AT&T Mobility/T-Mobile USA case. Four competitors for mobile phone service (AT&T Mobility, Verizon Wireless, T-Mobile USA, and Sprint) is barely enough competition, if that. Going down to three would have made the competition landscape worse; that's why the merger was denied (and is why the Comcast/Time Warner merger should not be allowed either). Everywhere Google Fiber goes, the incumbents lower prices to try to retain customers.
So would forcing telcos and cablecos to share their infrastructure help foster competition? I'm unsure. It sounds good, because it's unlikely people would tolerate the many additional sets of cables on their utility poles which would be necessary to provide alternate services. It'd be really ugly. Plus the incumbents have a head start of several decades. Laying new cable is expensive and will require lots and lots of time. So maybe cable and office sharing, as is done with phone service, might help.
One thing the FCC (or some other part of the federal government) could do is annul and void any local laws which limit or even outlaw competition. Such is the case with some municipalities wanting to string their own fiber optic networks. The whole reason they even consider it is that the telcos and cablecos refuse to provide the level of service the community seeks. But no, instead of upping their game and meeting the challenge, once again the telcos and cablecos use their lobbyists to enact laws making muni broadband illegal.
Of course, this isn't really 'Net Neutrality. The real issue is a need for increases in local competition.
Myth: If it's written into 'Net Neutrality that extra charges for a "fast lane" will be disallowed, my Internet bill will be kept lower.
Truth: Practically any way it's worded, it won't preclude the possibility of either speeding up or slowing down some class of traffic.
I'll bet dollars to donuts some "propeller head" is going to look at 'Net Neutrality regulations, see that no throttling (or its inverse, preferential packet treatment) is allowed and that charging extra for a "fast lane" will not be allowed, then sue their ISP claiming that because they have only the 15 Mbps tier and not the 100 Mbps tier, the ISP is illegally charging extra for the 100 Mbps service. Still others will see a slowdown in their service because of some poor bandwidth management by their ISP, and sue because they think they're being throttled.
If you've never heard of it, let me try to explain "committed information rate" (or CIR). That is the rate of information transfer your ISP guarantees you have, let's say 3 Mbps. This means that no matter what, the ISP guarantees that connections through your link adding up to 3 Mbps will stay at that rate. In order to meet that guarantee, 3 Mbps of each peering connection the ISP has must be reserved exclusively for that customer. The customer's connection is often capable of much faster than that, say 100 Mbps, so it can have burst speeds up to the technology's (or the provisioned) limit. Basically, any rate between the CIR and the provisioned rate is best effort. As you can imagine, dedicating bandwidth on multiple WAN links is an expensive proposition, as it means that at times there is bandwidth on a link just sitting idle, not being used at all. Therefore, the customer requesting the CIR is charged commensurately.
A CIR is typically only engineered for businesses, because that CIR may be critical to providing their service (e.g., an HD video stream may require some CIR to work at all, and HD video streaming is the company's product/service). For what most folks have, there is no CIR, and it is all best effort. It says as much in the terms of service. This leads to a commonly accepted ISP business practice called oversubscription. It simply means that the ISP counts on not every one of its customers utilizing their provisioned rate simultaneously. A typical ratio in the dialup Internet days was 10 to 1 (meaning for every 10 customers they had dialing in at 56Kbps, they would have 56Kbps of capacity to their upstream Internet provider); I'm not sure what common practice is for residential broadband.
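A quick sketch of the arithmetic, using the dialup-era 10:1 ratio from above (the broadband numbers below are made up for illustration):

```python
def upstream_needed(subscribers, rate_mbps, oversub_ratio):
    """Upstream capacity an ISP provisions under oversubscription:
    it bets that only 1/ratio of the sold capacity is in use at once."""
    return subscribers * rate_mbps / oversub_ratio

def upstream_with_cir(subscribers, cir_mbps):
    """With a CIR there is no oversubscription: full reservation per customer."""
    return subscribers * cir_mbps

# 1,000 subscribers sold 50 Mbps at a 10:1 ratio need 5,000 Mbps upstream;
# guaranteeing every one of them a 50 Mbps CIR would need 50,000 Mbps.
```

The tenfold difference in provisioned capacity is exactly why best-effort service is so much cheaper than a committed rate.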
So, let's apply this to 'Net Neutrality. Some residential "propeller head" type is going to see their connection sold as 50 Mbps only going 30 due to heavy use at the time, and is going to start shouting "throttling!" to the ISP/FCC/courts. Of course, if they wanted a CIR of 50 Mbps, it ain't gonna be no $80/month, because theoretically it's not going to be shared with their neighbors on the same CMTS or DSLAM; it's going to be a guaranteed 50 Mbps. The cost of providing most service is kept really low by oversubscription and no CIR. If customers think they are consistently getting under their advertised rate, their existing remedy is something like suing for false advertising or fraud.
So there is no need for extra "you can't charge extra for an Internet 'fast lane,' or throttle my connections" regulation. Attempting it is just going to push everybody's monthly rate through the roof, because ISPs will be forced to move away from the oversubscribed, best-effort model to a CIR model, lest they be accused of throttling or providing preferential packet treatment.
Myth: Since all TV is just bits these days, 'Net Neutrality could mandate that cable TV providers who are also ISPs (so, virtually all of them in the US) will be forced to reallocate video bandwidth to Internet servicing, lest they be accused of not treating all bits as equal. As a result, your TV will start "buffering" like your Netflix, Amazon Instant Video, iTunes, etc. does.
Truth: This is also just utter nonsense, and is proposed by those who are totally ignorant of how cable TV infrastructure is electrically engineered. Bandwidth used for video delivery cannot be arbitrarily reallocated to DOCSIS or vice-versa. It's just technically impossible. Both TVs and cable modems would have to be totally reengineered to have that capability.
This is the latest uninformed rambling by Mark Cuban, parroted by Glenn Beck. Bandwidth and frequencies are allocated ahead of time. Some channels will be dedicated to video delivery, even if it is switched digital video (SDV). Some bandwidth will be dedicated to telephony if the MSO decides to offer that service. It has to be. Otherwise stuff like E911 could never work; that bandwidth must be available, and it can't be reallocated for something else. And finally, it is just not technically possible to take bits being transmitted as video QAM and have them magically start being used by DOCSIS modems. DOCSIS will also have channels dedicated to it. Even if it were physically feasible to start taking video QAM bandwidth and use it for DOCSIS instead, more CMTSes would have to be deployed to do so; it can't just magically start to happen.
Besides, there is very little "buffering" in a video QAM stream. The receiving equipment very simply is not engineered to do so. It is engineered to assume that the stream is (fairly) reliable and continuous, so only a few tens of milliseconds of video is ever inside the TV waiting to make its way to the screen. This is a far cry from the multiple seconds of buffering typically done by an Apple TV, Roku, Chromecast, Amazon Fire TV, smart TV, etc. The worst that will happen if a stream is interrupted is picture macroblocking and sound dropout; there will never be any "buffering" or "rebuffering." It was just totally ignorant of Glenn to even say that.
Imposition of 'Net Neutrality will have precisely ZERO effect on delivering cable TV video streams.
1 A channel is a range of frequencies. When a transmitter transmits, it is said to be transmitting on some frequency, say WBEN-AM at 930 KHz. The physics reality is that WBEN-AM is actually occupying 925 KHz to 935 KHz, which assumes their modulating frequencies are held to 0-5 KHz. The very process of modulating the carrier causes frequencies in that range to be emitted by the transmitter. (In reality, the FCC and the CRTC probably will not allocate 920 nor 940 KHz nearby, allowing WBEN-AM to modulate with better fidelity than 0 to 5 KHz.) As another example, WBFO-FM is said to transmit on 88.7 MHz. But as FM implies, the frequency is varied +/- 75 KHz, thus really occupying 88.625 to 88.775 MHz. An additional 25 KHz "guard band" on either "side" is used for spacing (to reduce interference), thus making FM broadcast channels 25 + 75 + 75 + 25 = 200 KHz wide.
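The footnote's arithmetic, as a quick sketch (an AM signal occupies the carrier plus and minus the highest modulating frequency; an FM channel is the deviation on both sides plus a guard band on each):

```python
def am_occupied_band(carrier_khz, max_modulating_khz):
    """Band an AM transmitter occupies: carrier +/- highest audio frequency."""
    return (carrier_khz - max_modulating_khz,
            carrier_khz + max_modulating_khz)

def fm_channel_width(deviation_khz, guard_khz):
    """FM broadcast channel width: deviation both sides plus two guard bands."""
    return 2 * deviation_khz + 2 * guard_khz

# WBEN-AM at 930 KHz with 0-5 KHz audio occupies 925-935 KHz:
# am_occupied_band(930, 5) == (925, 935)
# FM broadcast: 25 + 75 + 75 + 25 = 200 KHz per channel:
# fm_channel_width(75, 25) == 200
```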
2 Meaning any manufacturer of equipment could submit their products to the FCC for testing against certain technical requirements, such as sufficient isolation of power from the phone line so that the device won't zap the phone company's equipment or technicians. It also must not emit any interfering radio waves.
Direct all comments to Google+, preferably under the post about this blog entry.
English is a difficult enough language to interpret correctly when its rules are followed, let alone when the speaker or writer chooses not to follow those rules.
"Jeopardy!" replies and randomcaps really suck!
Please join one of the fastest growing social networks, Google+!