Subtitle:  The heavyweights are scrapping over this, and the little guy is bound to get crushed.

This is one of those arguments that has been going on in tech/policy circles for years, and because it’s just ever so slightly esoteric, the mass media hasn’t really picked up on it.  And that’s a shame, because it affects Joe Average more than he realizes.  In fact, Joe Average is the guy who’s most likely going to end up having to foot the bill, and it will affect how he’s able to access content on the Internet.

So What Is Net Neutrality Anyway?

Net neutrality is, in essence, a practice wherein all data on the Internet is treated equally by the service providers and carriers who manage the physical backbone of the Internet.  Traffic from all sources is delivered to the end user with the same level of priority.  No one gets any special treatment, either beneficial or detrimental.
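To make that concrete, here’s a toy sketch in Python contrasting a neutral, first-come-first-served network with one where the carrier picks winners.  The sources and priority numbers are entirely made up for illustration:

```python
from collections import deque
import heapq

# (source, carrier-assigned priority) -- lower number = more favored.
# Every name and number here is invented purely for illustration.
packets = [("Netflix", 1), ("small-blog.example", 3), ("YouTube", 1), ("email", 2)]

# Neutral network: strictly first-come, first-served; the source is ignored.
neutral = deque(packets)
print("neutral order:    ", [src for src, _ in neutral])

# Non-neutral network: the carrier's priority tag decides who goes first.
prioritized = [(prio, i, src) for i, (src, prio) in enumerate(packets)]
heapq.heapify(prioritized)
print("prioritized order:", [heapq.heappop(prioritized)[2] for _ in range(len(prioritized))])
```

Run it and the small blog drops to the back of the line on the non-neutral network, purely because of who it is.  That’s the whole debate in a dozen lines.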

Sounds reasonable, so why is there an argument about it?

The problem arises from how the Internet has evolved and changed over time.  It used to be that bandwidth was limited, but so was the number of people with access to the Internet.  Also, the type of traffic being sent down the pipe was generally built to accommodate those limitations.  On top of this, the people accessing the Internet were generally savvy enough to use the right tools to access the different types of information and resources available.  In (relatively) simple terms, this meant that websites were accessed through HTTP, email was accessed using SMTP, and larger files were transferred using FTP.

Websites and emails were, by and large, plain text, and very small in terms of file size and bandwidth usage.  File transfers could be large, but they were handled differently and could be managed by the FTP protocol.

Over time, the waters have become much muddier.  Plain text websites and emails gave way to websites and emails with larger and larger embedded graphics, then video.  “Streaming Audio” and “Streaming Video” started to take up an increasing chunk of the available infrastructure.  Peer-to-peer (P2P) came along, and the data floodgates really opened up.  Multiplayer online games came along, transferring large chunks of information between multiple users in real time, on a heavy-demand basis.  “Torrents” entered the picture, and all of a sudden, anyone could transfer truly massive files amongst a wide audience.

With the advent of Internet-enabled smartphones, and now with “High Speed” wireless 3G technology, all of this has become unchained from the desktop, and can follow you as you’re walking down the street, riding the bus, or driving your car.

The load level on the backbone is growing at an exponential rate that very few people predicted.

At the same time, the technical skill of the average user fell off a cliff.  More and more, you need less and less technical skill to navigate all this.  Just click on an icon and everything just works.  Joe Average isn’t going to question the demands he’s putting on the critical infrastructure of the net when he logs on to play World of Warcraft, spends hours on end clicking on YouTube videos, or fires up his P2P or Torrent software to swap music and movies with the Internet community at large.

And why should he question it?  Everything just works.  He’s not a programmer or technologist, and just wants the convenience of accessing media in the easiest possible manner.  Well-designed software and websites make the whole process point-and-click easy, and so long as he pays the monthly bill to his Internet provider, all the technical issues are someone else’s problem.

The end result is that traffic is growing so fast that the Carriers and Internet Service Providers are bursting at the seams trying to keep up.  The very backbone of the Internet has become clogged with higher loads than it was ever designed to handle.  And the technology to improve that load handling is expensive.  Expensive to buy, and expensive to implement.  It’s not a matter of just tacking on a couple extra Servers and Routers at the local phone switch.  The physical wires themselves are funneling as much data as they can handle.  Trenching in thousands upon thousands of extra miles of fiber and cable, and laying in massive new undersea cables to connect the continents, bears an almost inconceivable price tag.  We’re talking a couple of dozen Sagans’ worth of dollars.

Something has to give, and someone has to pay for it.

Something has to give…

Basically, we’re left with three options:

1. Control, or shape, the flow of traffic:

This approach essentially abandons the precept of Net Neutrality that has been in place since the dawn of the Internet.  ISPs and Carriers are, not surprisingly, in favor of this approach.  They want the ability to control the flow of heavy bandwidth applications on the Internet.  They argue that they should have the right to rein in the “bandwidth hogs” in order to maintain a stable infrastructure and a more reliable experience for the majority of users.  There is an oft-repeated claim that 5% of the users are using up 95% of the bandwidth at certain peak times.  So why should the 95% pay the price for the gluttony of the 5%?

The problem with this statistic is that it is, at best, a guesstimate.  At worst, it’s a self-serving fabrication.  Go ahead and prove me wrong on this, if you can.  You can search for a week, and while you’ll come up with a ton of people who quote this statistic, you simply won’t find a reliable source study for it.

The truth is that figuring out exactly what and who are using up all the bandwidth at any given time is a far more challenging technical task than you might think.  No one really knows what’s using up all that bandwidth.  The people who quote the 95/5 ratio are often the same people who quote another statistic with very little basis in any provable study: namely, that Spam accounts for almost half the traffic on the net.

So, what is it?  Is Spam using up half the Internet?  Or are the YouTubes and BitTorrents, Gamers, and other hogs using up 95%?  You can’t have it both ways.

So, by shaping traffic, you end up punishing people, the “Hogs”, for a problem they might not even be causing.  This doesn’t strike me as particularly fair.  And it also strikes me as a chilling way of stifling growth and innovation.

Also, how do you determine when someone crosses the line into being a hog?  After their 5th YouTube video in an hour?  Right off the bat at their first?  Do you punish the researcher pulling in an experimental Linux distribution over BitTorrent (perfectly legal in any country) because the file is 4GB and Torrents use up huge swaths of bandwidth in a distributed way?

Or do you go after the individual websites, which are already paying large bandwidth fees, for serving up endless streaming music and videos?

Think about it, I mean really think about it for a minute.  Where is the line?  Who gets pushed to the margin and has their flow of data and information squeezed through a tourniquet?

2. Make the DISTRIBUTOR pay:

Hey, YouTube, you’re breaking the nets.  Time to pony up for the bill.  Same goes for you, Hulu.  And all you indie webcasters serving up streaming audio and video?  Better look between the cushions of the couch, because you’re gonna need to cough up some change, too.

I’ll admit, on the surface this seems somewhat reasonable.  There are certain sites and types of traffic that are hugely demanding on the infrastructure of the internet.  And, by and large, these services aren’t engaged in charity.  Through a variety of models (advertising, paid memberships, etc.), they’re all making a buck off the web, so why shouldn’t they pay?

Because they already do pay.

Dedicated hosting with good access to the backbone costs money.  The Data Centers in turn pay large fees to the Carriers and ISPs for this access.

The problem arises from the fact that the tolls are all paid at the point of entry.  Say, for instance, you have a funny animal video website and your Data Center is in Calgary (Canada). Your access fees are going to be funneled into either Telus Communications or Shaw Communications.

But once your data is unleashed on the Internet, it’s not going to play nice and stay within those point of entry service providers.  It’s going to cross corporate boundaries, and international borders.  In all likelihood, some ISP in Australia is going to have to cover the cost of the “Last Mile” delivery of some of that data to an outback farmer.  And dozens of ISPs will have carried the traffic between the two.

Now, all the Telcos and ISPs have revenue sharing and distribution agreements in place.  International agreements have been in place since the dawn of the long distance phone call.  In practice, however, it’s an unfair sharing arrangement, weighted towards rewarding the entry point providers.  In an age where everything was voice communication, this didn’t really matter.  Things had a tendency to balance themselves out.  The Telcos with the largest user bases had the biggest costs, but also the biggest revenue streams.  Smaller providers had smaller user bases, and lower costs.

And everyone was subsidized by their respective governments.  Yes, even in the US, where the subsidies were hidden inside government and military procurement and research contracts.

The Internet has changed a lot of this.  Heavy data providers can be anywhere, and are frequently located in sparsely populated areas to take advantage of cheap real estate and electricity.  Governments are getting out of the communication subsidy game in many regions.  And the Telcos no longer have a monopoly.  In most regions there are multiple entry point providers, and they’re all fighting tooth and nail to keep as much of the pie as they can in order to increase their own profits and cover the cost of their own growth.

Given that these companies have essentially log-jammed revenue sharing for years now, what do you think the odds are that they’ll be able to come to an agreement on how to “share” increased entry point fees?

In the real world?

3. Make the end user pay:

In a fair, rational world, this is actually the most sensible option.  It puts the burden of cost on the end consumer, where it has rested traditionally since the dawn of capitalism.

And really, you’re not going to get a lot of argument from me that this is the sensible option.  The “Last Mile” cable is, in macroeconomic terms, the most expensive part of the network.  With reasonable cost-of-provision-plus-margin pricing, this would go a long way towards paying the costs of upgrading the backbone.  Last mile ISPs would in turn pay a fair rate to Carrier ISPs (and often these companies are one and the same), who could in turn afford to upgrade the backbone carrier lines and infrastructure.
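For the sake of argument, here’s what cost-plus pricing might look like in code.  Every dollar figure below is invented out of thin air for illustration; no real ISP’s books were consulted:

```python
# Toy cost-plus pricing for last-mile service. Every number below is
# invented for illustration, not drawn from any real ISP's books.
last_mile_cost = 28.00      # monthly cost to provision one subscriber's line
backbone_share = 9.00       # that subscriber's share of backbone upgrades
support_overhead = 5.00     # billing, support, and administration
margin = 0.15               # a 15% profit margin on top

cost_to_serve = last_mile_cost + backbone_share + support_overhead
fair_price = cost_to_serve * (1 + margin)
print(f"cost to serve: ${cost_to_serve:.2f}/mo -> fair price: ${fair_price:.2f}/mo")
```

Simple: add up what it costs to serve one subscriber, tack on an honest margin, and put that number on the bill.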

But that only works in a fair and rational world.  Not the world we actually live in.

Where this starts to break down is that the internet has really gotten people used to the idea of “free” content (a myth, there is no free lunch, even on the internet – but that’s a different rant altogether).  People expect to pay a nominal flat fee and have the vast resources of the internet blazing down the pipe at warp speed.

And with the cutthroat pricing in the end user market, it’s very difficult for service providers to charge what it actually costs to make the whole thing work.  ISPs are realists, and they know that if they increase costs, even a little, for the end user, they’re going to face a backlash.  So instead they play little underhanded games and try to recoup their costs elsewhere.

What do you mean, underhanded games?

I mean Traffic Shaping, and I mean sneaky little fine print in your service contract.

A number of ISPs are already practicing traffic shaping, and a few have even been called on the carpet for it.  Technically, it’s against the rules, and various regulatory agencies have gotten involved.

The ISPs defend their actions by saying they need to engage in these practices in order to preserve the integrity of the infrastructure, and that the fine print of your service contract allows them to do it.  You see, most ISP agreements with end users allow them to block or throttle traffic if you’re using your home service to operate a server.  In the case of P2P traffic, you’re effectively running a “server-like” service by handling large uploads and downloads simultaneously.
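What does “server-like” detection actually look like?  Here’s my guess at the kind of crude heuristic an ISP might apply; the thresholds and traffic numbers are invented, not taken from any real provider:

```python
# My guess at a crude "server-like" heuristic; thresholds are invented.
def looks_server_like(up_bytes: int, down_bytes: int, peer_count: int) -> bool:
    symmetric = up_bytes > 0.8 * down_bytes   # uploads nearly match downloads
    many_peers = peer_count > 50              # talking to many hosts at once
    return symmetric and many_peers

# Ordinary web browsing: mostly downloads, a handful of peers -> not flagged
print(looks_server_like(up_bytes=5_000_000, down_bytes=200_000_000, peer_count=8))
# A busy torrent session: heavy uploads to many peers -> flagged for throttling
print(looks_server_like(up_bytes=900_000_000, down_bytes=1_000_000_000, peer_count=120))
```

Notice how blunt an instrument this is: any traffic pattern with symmetric uploads and lots of peers gets swept in, whether it’s piracy or a perfectly legal Linux distribution.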

But man, that’s really stretching it.  What they’re really doing is cutting back on service they never had any intention of giving you in the first place.

When you sign up for internet service, the service providers make sure they advertise the maximum upload and download speeds available to their customers.  Then, in really, really, really fine print, they add riders and qualifications to these maximums.  If you read carefully, you’ll see that these maximums are only available during non-peak traffic times, are dependent on “upstream” bottlenecks in the network, the phase of the moon and the stars, and the alignment of Pluto to Virgon Alpha 5.

In practice, you might be lucky to hit those maximums for a few seconds a day, if that.  Under ideal circumstances.

The fine print also warns that if they determine that you’re running a server, or server-like service (read: use P2P software), they’re going to tie a hose clamp to the pipe up to your house and twist it as tight as they like.
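That hose clamp has a name in networking.  Shaping is commonly done with something like a token bucket: tokens drip into a bucket at the allowed rate, and every byte sent spends one.  Here’s a minimal sketch; the rates are illustrative, not any real ISP’s settings:

```python
import time

class TokenBucket:
    """Tokens drip in at the allowed rate; each byte sent spends a token."""
    def __init__(self, rate_bps: float, burst: float):
        self.rate = rate_bps            # sustained bytes/second allowed
        self.capacity = burst           # largest burst tolerated
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, up to the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False    # over the limit: the packet gets delayed or dropped

# ~1 Mbps sustained (125,000 bytes/s) with a 250 KB burst allowance -- made-up numbers
shaper = TokenBucket(rate_bps=125_000, burst=250_000)
print(shaper.allow(100_000))   # True: fits within the burst allowance
print(shaper.allow(200_000))   # False: bucket drained, traffic held back
```

Twist the clamp tighter by lowering the rate, and your “up to 10 Mbps” connection quietly becomes whatever the ISP decides it should be.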

And you have to look deep into their websites to find any reference to the fact that they can (and do) limit total bandwidth in any given month.  And once you exceed that allotment, they cut back your available speed, or charge you extra, or some combination of the two.
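In code, that cap policy amounts to something like the sketch below; the cap size, overage rate, and throttled speed are numbers I made up for illustration:

```python
# Sketch of a monthly cap policy; the cap, overage rate, and throttled
# speed are all invented for illustration.
CAP_GB = 100.0
OVERAGE_PER_GB = 2.00     # dollars per GB over the cap
THROTTLED_MBPS = 1.5      # speed after the cap, for plans that slow you down

def month_end_bill(usage_gb: float, base_fee: float = 50.00) -> tuple[float, str]:
    over = max(0.0, usage_gb - CAP_GB)
    bill = base_fee + over * OVERAGE_PER_GB
    note = f"throttled to {THROTTLED_MBPS} Mbps" if over else "full speed all month"
    return bill, note

print(month_end_bill(80.0))    # (50.0, 'full speed all month')
print(month_end_bill(140.0))   # (130.0, 'throttled to 1.5 Mbps')
```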

And that’s where I get annoyed.  Don’t advertise one thing and deliver something else altogether.  Be honest with us.

I know I’m a bandwidth hog.  I work in IT, and I spend 80% of my waking hours online.  I move big files around, up and down the pipe, sometimes for work, and sometimes for personal reasons.

I pay for the highest grade of residential service available to me to make it all possible. And, you’ll be shocked to know, I expect to get what I pay for, especially if it’s the only option available to me.

Would higher fees make me happy?  Nope.  I’m a cheap bastard.

Would they tick me off less than the underhanded bandwidth throttling I have to put up with now?

You better believe it.  I’m never happy about having to pay extra.  But I’m always willing to do it if that provides the service or equipment that I need (or want) to get a given task done.

I don’t buy cheap tools.  For a reason.  I don’t buy cheap electronics.  For a reason.  I’ve learned through bitter experience that cheap does NOT pay in the long run.  I’m always willing to pay extra for something that’s going to work best for my needs.

So is that the solution?  Charge the 5% of users more money?

Actually, yes, it is.  Mostly.

The ISPs need to get their business model straight.  Stop trying to subsidize your service with hidden traps like traffic shaping and fine print disclaimers.  Tell me what I’m going to get.  Tell me what I have to pay to get it.

If the ISPs were honest with their customers, they would face a whole lot less grief in the long run.  Sure, people might gripe when their bill goes up.  Initially.

But the market has always shown that there will be a segment of consumers that are willing to pay a premium price for a premium service.

And the companies that accept this are the ones that are going to win out in the long haul.

So, hey, Rogers/Shaw/Bell/Telus/Comcast/AT&T/Sprint etc. etc.

Quit being wieners and pointing fingers.

Charge people a fair and reasonable rate.  Be honest about the service you’re giving them.  The market will reward you.

