To the generation that grew up with access to the online world, the Internet seems an essential part of modern life that was simply destined to come into being. It may therefore come as a surprise to learn that the public almost did not get the Internet. For more than two decades, the pre-World Wide Web networks were open only to the military and academia. The network was far too expensive for the everyday person, and the telecommunications companies resisted every attempt by the federal government to extend funding to the private sector. Despite this apparent impasse, however, the government, academia, and the early computer companies collaborated in a way unseen today to bring the Internet to the American public.
One of the earliest computer networks, the ARPANET, arose in the late 1960s as a series of connections between universities. Academics used it to share research both within their own institutions and with colleagues at other universities, transmitting information electronically from primitive computers through AT&T’s underground telephone wires. The universities with these connections benefited immensely: they were linked to other research institutes, and they housed the first supercomputers, which attracted top scientists from around the U.S.
A key drawback of this network was the exorbitant cost of leasing access to these wires. Leases ran thousands of dollars per month because the price depended on the length of wire needed to connect two points. For example, Michigan State University and Ohio’s Case Western Reserve University, two early participants in the ARPANET, are 233 miles apart; connecting the two schools meant leasing over 200 miles of underground telephone wire. Although the government funded the research conducted over the network, the money to build and use the infrastructure itself still had to come out of the schools’ coffers.
Part of the reason the leases cost universities so much was that AT&T was the only company that could provide this service reliably. As the most entrenched phone company of the era, it held an effective monopoly and could charge what it pleased. Universities tried to mitigate costs by recruiting other schools into their network: if two schools found a third university located geographically between them, they could split the cost three ways and benefit from the added resources each school offered.
Through government grants and a larger network of connected universities, computer scientists were able to develop better and faster networks. It was not long before people outside of government and academia began hearing about the concept of “the Internet” and wanted access to it. Connecting to the Internet was still incredibly expensive, but people within academia and the National Science Foundation (NSF) believed it was important to bring the public onto this growing network. They went to Congress and argued that the best way to ensure affordable access for the rest of the country was a federal spending initiative.
Unfortunately, AT&T’s lobbyists blocked the NSF’s attempts to create a network similar to the one we have today, objecting that federal subsidies for public Internet access would amount to government interference in the private sector. To get around this, the universities asked the government to pay only for connections between universities. The lobbyists did not see much of a market there, so they agreed. Although this did not provide Internet access to the public directly, people in the government hoped for a trickle-down effect.
Academia, the federal government, and the public got their wish when a young company called MCI Communications Corp. challenged AT&T’s monopoly in 1974. After years of litigation, MCI broke the monopoly in 1980, paving the way for the overhaul of telecommunications in 1996. Universities, meanwhile, had been working around the “academics only” access policy since 1990 by providing indirect access to the public. By the mid-1990s it was evident to lawmakers that the public wanted the Internet and that companies and universities would keep finding ways to provide it regardless of any obstacles. To meet the needs of the public and energize the communications market, President Clinton signed the Telecommunications Act of 1996 into law.
By this time, networks had begun to spring up all over the world. Computer scientists across the globe collaborated on browsers, systems, and interfaces that made the Internet more user-friendly and useful. Everyday people met one another through newsletters and email, companies reached distant markets, and access to information became so easy that it forever changed the way people related to the world. Around the globe, computers dialed up, and the electronic crowing of modems heralded the dawn of the Internet Age.
Click here for Part 1.
Note: Liz’s blogging challenge at Eccentric inspired me to write this historical article on the Internet.
The majority of my research came from the Coursera course “Internet History, Technology, and Security.” It was a wonderful course taught by Dr. Charles Severance at the University of Michigan.