My hipper-than-thou grandmother asked for my comments on this article, which was provoked by the 40th anniversary of the internet and the erosion of the openness that the internet was founded on.
So, here are four things in the internet’s immediate future that I’m either looking forward to or concerned about:
Full standards compliance across web browsers
Web standards are becoming stronger. The latest versions of all the major web browsers support HTML 5 to some extent, and all are relatively standards compliant. The popularity of older versions of Microsoft Internet Explorer put web application development into a dark age that spanned almost a decade: Internet Explorer was not standards compliant, which made it very difficult to build web applications that worked the same way in other browsers like Firefox and Safari.
The web is more standards-compliant than it has ever been, and as people and IT departments get around to upgrading their browsers, innovation on the web will flourish in ways we have not yet seen. This is a very good thing.
Saturation of IPv4 will force rapid adoption of IPv6
The article references difficulties that some developers experience in getting applications to talk to each other across internal company networks. The inevitable coming of IPv6 will make this less of an issue. IP is the addressing system used on the internet to identify each machine on the network. Internet-connected machines today use IP version 4 (IPv4), but we are approaching the point where the number of machines connected to the internet will exceed the number of addresses that IPv4 can uniquely identify. Wikipedia says that all IPv4 addresses will be used up some time in 2010.
The IPv4 problem is the equivalent of a neighborhood policy stating that house numbers can only have three digits while there is demand for more than 1,000 houses in the neighborhood. This creates a supply-versus-demand problem: the price of IPv4 addresses will increase as more of them are needed and the internet grows.
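To put numbers on the analogy (just back-of-the-envelope arithmetic): three decimal digits give you 1,000 possible house numbers, and IPv4’s 32-bit addresses give you about 4.3 billion possible machines.

```python
# Back-of-the-envelope: how many unique addresses each scheme allows.

house_numbers = 10 ** 3   # three decimal digits: 000 through 999
ipv4_space = 2 ** 32      # IPv4 addresses are 32 bits long

print(house_numbers)      # 1000
print(ipv4_space)         # 4294967296 (~4.3 billion)
```

Billions sounds like a lot, but large blocks were handed out early and wastefully, and the count of internet-connected devices keeps climbing toward it.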
The address space problem has been mitigated to some extent by a technique called NAT (we use it at the Old Homestead). NAT (Network Address Translation) allows a sub-network of machines to communicate with the internet using a single public IP address. While it mitigates the IPv4 address shortage, NAT makes it more difficult to establish point-to-point communication between two computers that are each behind a NAT gateway. So, instead of software that works like this: (computer A) <--> (computer B), you get software that works like this: (computer A) <--> (service provider) <--> (computer B). NAT is the reason a significant number of internet service businesses exist; it allows them to act as (and in some cases charge for being) an intermediary between you and whoever you are trying to connect to. Services like Vonage, GoToMyPC, WebEx, and Skype come to mind as I’m writing this.
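A quick illustration of why a NATed machine needs an intermediary (using Python’s standard ipaddress module): the addresses a home router hands out come from the private ranges, which are not routable on the public internet, so nobody outside can initiate a connection to them directly.

```python
import ipaddress

# A typical address a home router assigns behind NAT (a private range)
behind_nat = ipaddress.ip_address("192.168.1.42")

# A publicly routable address (one of Google's public DNS servers)
public = ipaddress.ip_address("8.8.8.8")

print(behind_nat.is_private)   # True  -> not reachable from outside the NAT
print(public.is_private)       # False -> reachable directly
```

Two machines that both print True here cannot simply dial each other; something with a public address has to sit in the middle and relay or broker the connection.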
To summarize, the limits of IPv4 have made it expensive to uniquely identify computers on the internet, and have fostered a genre of companies that act as unnecessary middlemen for many online services in order to solve this problem.
IPv6 solves the address space problem outright; its 128-bit addresses give roughly 3.4 × 10^38 possibilities, vastly more than we could plausibly ever need. In an ideal scenario, ISPs would offer an unlimited number of IPv6 addresses to their customers, allowing them to address their machines from anywhere without the need for a third party. The middle-man companies would have to change their business model or go out of business, and a new generation of peer-to-peer software would blossom and bring all sorts of innovative applications to the internet. There are security implications that go along with this model of communication, so we end up with the classic trade-off of availability vs. security, but that’s a choice I would like to have.
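For a sense of scale (simple arithmetic, nothing more): the IPv6 address space is 2^96 times larger than IPv4’s, so every single IPv4 address could be replaced by an entire universe of 2^96 IPv6 addresses.

```python
ipv4_space = 2 ** 32    # about 4.3 billion addresses
ipv6_space = 2 ** 128   # about 3.4 * 10**38 addresses

# How many times bigger is the IPv6 address space?
ratio = ipv6_space // ipv4_space
print(ratio == 2 ** 96)   # True
```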
We may also see new hardware applications that leverage the increased network availability IPv6 brings for things currently considered too trivial for network connectivity. For example, controllable light switches that can report when they have been left on, and temperature sensors that can tell you whether to start your A/C or heater before you come home.
Two more things and I’m done:
Google, Amazon, and Microsoft all offer cloud services that promise nearly unlimited scaling and reliability for hosted web applications. This is great, but it also concentrates policy in the hands of only a few companies, which may end up stifling innovation if the company that owns the cloud decides to prohibit services that compete with the ones it already offers. This has not proved to be a problem yet, but the issue parallels the net neutrality debate and the controversies around Apple’s iPhone app store to some extent.
Many services, big and small, eBay, Google, UPS, and Facebook, just to name a few, are “opening up” their platforms using standards like SOAP, REST, or plain XML. This means they are releasing formal specifications that tell people like me how to write software that interacts with their service and extends it in ways the company hasn’t yet anticipated. In an interesting twist on the idea of using standards to open platforms, the RSSCloud project proposes using RSS in an innovative way to replace centralized messaging services like Twitter and Facebook.
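A sketch of what “opening up” looks like in practice: a REST-style service is mostly just HTTP plus documented XML responses, so any language that can fetch a URL and parse XML can build on the platform. The payload below is hypothetical (no real service’s format), parsed with Python’s standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML response from a REST-style endpoint, e.g.
# GET https://api.example.com/v1/items/42 -- illustrative only,
# not any real service's actual format.
payload = """
<item>
  <id>42</id>
  <name>widget</name>
</item>
"""

item = ET.fromstring(payload)
print(item.find("name").text)   # widget
```

Once the response format is published as a spec, third-party developers can combine services like this in ways the original company never planned for.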