History Of The Web And HTTP


We begin our exploration of application-layer protocols with HTTP. As previously stated, we start with the Web not only because it is a hugely popular application, but also because its application-layer protocol, HTTP, is simple and easy to understand. We will take it slow. We will first discuss why the Web was invented in the first place, and then talk about HTTP as the application-layer protocol backing the Web. You may not need to know the history of the Web or HTTP to create a network application. However, understanding the Web's history provides context for how the technology has evolved over time. This knowledge can help you make better decisions about web design and usage, anticipate and plan for future developments, and gain insight into the technology's societal and cultural impact.

The Cold War And The ARPANET

Let us go back to the year 1957. Before the late 1950s, most computers were large and expensive mainframe machines that only a few big corporations could afford. On top of that, these computers could perform only one task at a time, which meant that only one user could interact with the machine at a time; everyone else who wanted to use it had to wait their turn. These computers were mainly used for large-scale data processing and scientific calculations, and they were operated by specialized teams of engineers and scientists.

This changed in 1957 with the introduction of the IBM 305 RAMAC, the first computer to use a hard disk drive (HDD) as a storage device. This development helped usher in the era of time-sharing systems, which allowed multiple users to share a single computer and its resources and perform different tasks at the same time. It was a key step towards modern computing, where many users can interact with a computer simultaneously.

On October 4, 1957, at the height of the Cold War, the Soviet Union launched Sputnik 1, the first artificial satellite, into orbit.

The launch of Sputnik 1 was a significant event in the Cold War because it demonstrated that the Soviet Union had the capability to deliver a nuclear weapon to the United States using a missile. This was a major concern for national security and marked the beginning of the space race between the United States and the Soviet Union.

In response to the launch of Sputnik 1, the United States government established ARPA (Advanced Research Projects Agency) in 1958. The goal of ARPA was to support scientific research in areas that were considered important for national security, such as computer science, information technology, and advanced communications. This was seen as necessary to ensure that the United States had the technological advantage over the Soviet Union in these areas, which was considered crucial for national defense.

In 1958, knowledge transfer was primarily done through face-to-face communication and written documents. People communicated with one another in person or through mail, telephone, and telegraph. Books, journals, and other printed materials were the main means of disseminating information.

The technology for transferring knowledge was relatively limited at the time, and it was not as easy or efficient as it is today. It took longer for information to be shared and received, and the process was often hindered by geographical distances and lack of access to technology.

However, there were some early examples of the use of technology for knowledge transfer, such as the use of telex machines to send typed messages and the use of telephones to communicate with people in different locations. Additionally, some institutions, such as libraries and research institutions, had begun to use computer systems to store and retrieve information. But it was not widespread.

ARPA planned a large-scale computer network in order to accelerate knowledge transfer. This network would become the ARPANET.

In parallel, three other concepts that are fundamental to the history of the Internet were being developed: the military network of the RAND Corporation in America, the commercial network of the National Physical Laboratory (NPL) in England, and the scientific network Cyclades in France.

The military, commercial, and scientific approaches of these projects form the foundation of our modern Internet.

Let’s begin with the ARPANET. Its development began in 1966. The main goals of the ARPANET project were to accelerate the transfer of knowledge among researchers and to avoid duplication of effort. The idea was to create a large-scale computer network that would connect researchers at different institutions, allowing them to share information and collaborate on projects more easily.

Universities were generally quite cautious about sharing their computers, partly because of concerns about security and the potential for system crashes, and partly because computer time was very expensive. To overcome this reluctance, the ARPANET team connected smaller, less expensive computers called Interface Message Processors (IMPs) to the mainframe computers. The IMPs acted as a “front end” to the mainframe, allowing users to access the mainframe’s resources remotely without having direct access to the machine itself.

This concept of connecting smaller computers to larger ones, also known as a “host-to-IMP” architecture, was a key factor in the success of the ARPANET. It let researchers at different institutions share resources and collaborate on projects without having to worry about the security and reliability of the mainframe computers, made more efficient use of computing resources, and allowed the network to expand in a manageable and affordable way.

For the first connections between the computers, the Network Working Group developed the Network Control Protocol.

The Network Control Program (NCP) was the first networking control protocol developed for the ARPANET. It was developed by the Network Working Group (NWG), which was a group of engineers and computer scientists who were responsible for designing and implementing the ARPANET. The NCP was the first protocol used to control the communication between the different nodes on the network and it was used from the inception of the network in 1969 until the early 1980s.

The NCP was developed to provide basic communication services such as routing, error checking, and flow control. It used a system of logical addresses to route data packets between the different nodes on the network, and it performed error checking on the data packets to ensure that they were transmitted correctly. The NCP also implemented flow control to prevent the network from becoming overwhelmed with too much data.

The NCP was a significant achievement for the Network Working Group and an important step in the development of the Internet. However, as the network grew and new requirements emerged, it became apparent that a new set of protocols was needed. This led to the development of the Transmission Control Protocol/Internet Protocol (TCP/IP), which replaced the NCP in the early 1980s and became the standard protocol suite for the Internet.

Let’s take a short detour to England. Since the NPL network was designed on a commercial basis, a large number of users and file transfers were expected. In order to avoid congestion of the lines, files were divided into smaller packets which were reassembled at the receiver. Packet switching was born! The NPL network was one of the first operational packet-switching networks and it served as an inspiration for the development of the ARPANET.
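
To make the idea concrete, here is a minimal sketch in Python (not the NPL implementation, just an illustration of the principle): a message is cut into small, numbered packets, the packets may arrive out of order, and the receiver uses the sequence numbers to put the original data back together.

    import random

    PACKET_SIZE = 8  # illustrative payload size in bytes

    def packetize(data: bytes) -> list[tuple[int, bytes]]:
        """Split data into numbered packets of at most PACKET_SIZE bytes."""
        return [
            (seq, data[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(data), PACKET_SIZE))
        ]

    def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
        """Sort packets by sequence number and join their payloads."""
        return b"".join(payload for _, payload in sorted(packets))

    message = b"A file too large to send over the line in one piece."
    packets = packetize(message)
    random.shuffle(packets)  # simulate packets arriving out of order via different paths
    assert reassemble(packets) == message
    print(f"{len(packets)} packets delivered and reassembled correctly")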

In 1962, American reconnaissance aircraft discovered Soviet medium- and long-range missiles in Cuba that were capable of reaching the United States. For American officials, the urgency of the situation stemmed from the fact that the nuclear-armed Cuban missiles were being installed so close to the U.S. mainland, just 90 miles south of Florida.

The presence of these missiles in Cuba represented a significant threat to the security of the United States, as they had the capability to reach most of the continental United States with nuclear weapons. In response, the United States implemented a naval blockade of Cuba and demanded that the Soviet Union remove the missiles. This standoff between the United States and the Soviet Union was considered the closest the world ever came to nuclear war and it was one of the defining moments of the Cold War.

During the Cold War, there was a concern that the centralized network architecture used in most information systems at the time would be vulnerable to a nuclear attack. A centralized network architecture, in which all data and control functions are concentrated in a single location, would be vulnerable to a single point of failure. If a single node or location were to be destroyed in an attack, the entire system would be disabled.

To address this concern, the ARPANET project, led by ARPA, aimed to develop a decentralized network architecture: if one part of the network went down, the rest of the network would still be able to function.

This was achieved by connecting small, less expensive computers called Interface Message Processors (IMPs) to the mainframe computers as mentioned above.

This decentralized network architecture served as the foundation for the development of the Internet as we know it today, and it was a key innovation in the field of computer networking. The ability to share information and collaborate in real time across different institutions and locations greatly accelerated the pace of scientific research and innovation.

During the Cold War, most communication systems used radio waves to transmit signals, including long-wave radio waves that were used for long-distance communication. However, there was a concern that in the event of a nuclear attack the ionosphere would be disturbed, long-wave radio signals would no longer propagate reliably, and communication would fail.

The ionosphere is a layer of the Earth’s atmosphere that is ionized by solar radiation. It reflects certain frequencies of radio waves, including long-wave radio waves, allowing them to travel long distances. However, a nuclear explosion would produce a high-energy electromagnetic pulse (EMP) that could disrupt the ionosphere, causing it to expand and absorb radio waves, making communication impossible.

Therefore, they had to use direct waves. Direct-wave communication, also known as line-of-sight (LOS) communication, is a method of wireless communication that uses radio waves traveling in a straight line between the transmitter and the receiver. This type of communication has a shorter range than radio waves that can bounce off the ionosphere, and it is limited by the curvature of the Earth and the presence of obstacles. It is often used for short-range communication, such as in microwave links and WiFi networks.

A better solution was the model of a distributed network, which allowed long distances to be covered with a minimum of interference.

In a distributed network, the resources and functions of the network are spread across multiple nodes or locations rather than being concentrated in a single place. This allows for more efficient use of network resources and makes the network more resilient to failures.

A distributed network architecture is well suited for covering long distances because it allows for multiple paths for data to travel. This means that if one path is blocked or becomes unavailable, the data can be routed through another path, reducing the risk of a total system failure.

The ARPANET, which was the precursor to the internet, was one of the first operational examples of a distributed network.

The Cyclades network, which was developed in France in the early 1970s, was another important milestone in the development of computer networking. Like the ARPANET, the Cyclades network was designed as a distributed network, but it had a smaller budget and fewer nodes. Instead of focusing on connecting researchers at different institutions, the Cyclades network focused on connecting different networks together, which is where the term “inter-net” was first used.

The Cyclades network was notable for its end-to-end principle, which emphasized that communication between sender and receiver should be direct, and that computers should not intervene in the communication process. To achieve this, the Cyclades network used a protocol that was implemented at the hardware level, which provided a direct connection with the receiver. This approach was different from the ARPANET and the NPL network, which relied on software protocols that were implemented at the application level.

The end-to-end principle, which was first introduced in the Cyclades network, is considered to be one of the key innovations of the Internet. It means that the communication between sender and receiver is direct, and that the network’s job is simply to transfer the data, without adding any additional functionality. This approach allows for more efficient use of network resources and reduces the risk of system failures.

The development of the Cyclades network, along with the ARPANET and the NPL network, was an important milestone on the way to the Internet as we know it today. The concept of interconnecting different networks under the end-to-end principle, together with packet switching, formed the foundation of the Internet.

The TCP protocol was developed by ARPA researchers to connect networks through gateways. The OSI Reference Model, developed by the International Organization for Standardization (ISO), aimed to standardize network communication by dividing it into separate layers. The OSI model is used to understand how the different layers of communication protocols work together to enable the flow of data. While it does not dictate how networks should be implemented, it provides a framework for understanding the different functions that are required for successful communication.

The OSI model, while providing a useful framework for understanding network communication, had some limitations in its ability to be implemented in practice. One limitation was that it was overly complex and difficult to implement in some cases. Additionally, the OSI model was developed by the International Organization for Standardization (ISO), which was a relatively slow-moving organization and it took a long time for the OSI standard to be finalized.

On the other hand, the TCP was developed by the US Department of Defense’s Advanced Research Projects Agency (ARPA) which had the goal of connecting multiple networks together. The TCP was designed to be simple and efficient, and it was able to work with the existing networks of the time.

In view of these limitations, TCP was further developed into the TCP/IP protocol suite, which proved to be a more practical and efficient alternative to the OSI model and could be implemented in existing networks more easily. It also allowed for the creation of a global network of interconnected computers, known as the Internet.

The Emergence Of The World Wide Web

So what’s the Web? It used to be difficult to explain what the Web would be like. Now it’s difficult to explain why it was difficult.

CERN, the European Organization for Nuclear Research, is a leading research facility in the field of high energy physics. High energy physics is the study of subatomic particles, such as protons and electrons, which are much smaller than atoms.

To study these particles, scientists at CERN use large machines called accelerators, which accelerate subatomic particles to extremely high speeds. The particles are then smashed together, creating high-energy collisions that can produce new, short-lived particles. To observe what these collisions produce, CERN uses large detectors: complex instruments that can detect and measure the properties of the particles created in the collisions.

These detectors are typically enormous, sometimes the size of a house. The data they gather is analyzed by scientists to identify new particles and to gain insight into the fundamental nature of matter and the universe.

CERN is a large organization with a significant number of employees, including scientists, engineers, technicians, and administrative staff. Many of the scientists who work at CERN are affiliated with universities and research institutions around the world and they come to CERN to use the organization’s large accelerator facilities to conduct high energy physics research.

CERN is not just a standalone laboratory; it is an extensive community that includes more than 17,000 scientists from over 100 countries. These scientists typically spend some time working on-site at CERN but also continue their research at universities and national laboratories in their home countries. Because of this large and geographically dispersed community, reliable communication tools are essential to facilitate collaboration and the sharing of information and resources among the scientists.

There was another challenge with the Internet of that time. Information was scattered across different computers, and users had to log on to different systems to access it. Additionally, different computers used different software and commands, making it difficult for people to learn how to use them all. This often led to people seeking help from colleagues during coffee breaks.

At CERN, the situation was further complicated by the fact that scientists from all over the world brought their own computers and software with them, resulting in a diverse and heterogeneous environment. This made it difficult to share and access information across different systems and platforms.

To address these challenges, Tim Berners-Lee, a researcher at CERN, proposed the development of a system that would convert information from one system to another, making it accessible to everyone. This idea eventually led to the creation of the World Wide Web, which uses a standardized language (HTML) and a common protocol (HTTP) to share information and resources, making it much easier for users to access and share information across different platforms and devices.

Now, one thing we want you to be clear about is that Tim Berners-Lee did not create the Internet. The Internet grew out of the ARPANET, which was developed in the 1960s by the United States Department of Defense’s Advanced Research Projects Agency (ARPA). It was created as a means of connecting different computer networks together and allowing researchers and academics to share information and resources.

When Tim Berners-Lee was developing the World Wide Web, most of the underlying technology and infrastructure for the Internet was already in place. He built upon this existing infrastructure to create a new system that made it easier for people to access and share information on the Internet. Specifically, he proposed the use of hypertext and a common protocol (HTTP) to share information, and he created HTML, a standardized language used to create and display web pages. The World Wide Web, built on top of the Internet, made information far easier to access and share, and it has since become the most widely used application on the Internet.

The Internet Protocol (IP) and Transmission Control Protocol (TCP) were developed by Vint Cerf, Robert Kahn, and their colleagues in the 1970s. These protocols form the foundation of the Internet and allow for the communication and transfer of data between different computer networks.

The Domain Name System (DNS), developed by Paul Mockapetris and others, is a hierarchical, decentralized naming system for computers, services, and other resources connected to the Internet or a private network. It is used to translate human-friendly domain names, such as “enablegeek.com”, into IP addresses that computers can understand.
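
As a quick illustration of what DNS does, the short Python sketch below asks the operating system’s resolver (which in turn queries DNS servers) to translate a hostname into IP addresses. The hostname here is just an example; any public domain name would work.

    import socket

    hostname = "enablegeek.com"  # any public domain name would work here

    # getaddrinfo asks the system resolver, which uses DNS to look up the name.
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        hostname, 80, proto=socket.IPPROTO_TCP
    ):
        print(f"{hostname} -> {sockaddr[0]}")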

Before the World Wide Web, people had already used TCP/IP and DNS to create email and other applications on the Internet. Tim Berners-Lee built on this existing infrastructure and proposed the use of hypertext links and a common protocol (HTTP) to create a new system, the World Wide Web, which made it much easier for people to access and share information on the Internet. He did not invent the concept of hypertext links; it had been thought about and proposed by others, such as Vannevar Bush in 1945 and Ted Nelson, who coined the term “hypertext”.

While working at CERN, Tim Berners-Lee became frustrated with the difficulties of finding information stored on different computers. He submitted a proposal on 12 March 1989 titled “Information Management: A Proposal” to the management at CERN, in which he outlined the concept of a “web” of interconnected documents and resources that could be accessed using a system called “Mesh”. This system was based on the ENQUIRE project he had built in 1980, which was a database and software project that used hypertext links embedded in text to connect documents.

In the proposal, Berners-Lee explained that this system would allow users to skip to different documents and resources with a click of the mouse, and he also noted the possibility of multimedia documents including graphics, speech and video, which he called “hypermedia”. This proposal laid the foundation for the World Wide Web.

When Tim Berners-Lee first submitted his proposal for the World Wide Web, it attracted little interest within CERN. However, his manager, Mike Sendall, encouraged him to continue working on the project and provided him with a NeXT workstation to begin implementing the system. Berners-Lee considered several names for the project, including Information Mesh, The Information Mine, or Mine of Information, but ultimately settled on the World Wide Web.

Despite initial lack of interest in his proposal, Berners-Lee found an enthusiastic supporter in his colleague Robert Cailliau, who helped promote the project throughout CERN. Berners-Lee and Cailliau also pitched the World Wide Web at the European Conference on Hypertext Technology in September 1990, but did not find any vendors who were able to appreciate the vision of the project. Despite this, Berners-Lee continued to work on the project and in 1991 the first website was published on the World Wide Web, and it quickly gained popularity among researchers and academics.

That is a simplified but accurate explanation of how Tim Berners-Lee created the World Wide Web. He took the idea of hypertext, which had been proposed by others before, and connected it to the existing Internet infrastructure of TCP/IP and DNS. This allowed him to create a system of interconnected documents and resources that could be accessed using hypertext links and a common protocol (HTTP). The combination of these technologies and ideas resulted in the creation of the World Wide Web, which made the Internet more accessible and user-friendly for a wider range of people.

Tim Berners-Lee’s breakthrough was to combine the concept of hypertext, which had been proposed before, with the existing infrastructure of the Internet, specifically the Internet Protocol (IP) and Transmission Control Protocol (TCP) and the Domain Name System (DNS). In his book “Weaving The Web,” he explains that he had repeatedly suggested to members of both the technical and the hypertext communities that a marriage between the two technologies was possible but when no one took up his invitation, he decided to take on the project himself.

To make this marriage possible, Berners-Lee developed three essential technologies:

    A system of globally unique identifiers for resources on the Web and elsewhere: the universal document identifier (UDI), later known as the uniform resource locator (URL), which is a reference to a resource, such as a web page or file, on the Internet.

    The publishing language Hypertext Markup Language (HTML), the standard markup language used to create web pages.

    The Hypertext Transfer Protocol (HTTP), a communication protocol used to transfer data over the web.

These technologies, in conjunction with TCP/IP and DNS, form the foundation of the World Wide Web, which made it much easier for people to access and share information on the internet.
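
To make these three pieces concrete, here is a minimal sketch in Python, using only the standard library, that fetches a page by speaking HTTP directly over a TCP socket. The hostname is just a placeholder, and a real application would normally use an HTTP library rather than raw sockets.

    import socket

    # The three pieces, spelled out for one request:
    #   URL:  http://example.com/  -> host "example.com", path "/", port 80
    #   HTTP: a plain-text request/response exchange over a TCP connection
    #   HTML: the markup document that comes back in the response body
    host, path = "example.com", "/"

    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    headers, _, body = response.partition(b"\r\n\r\n")
    print(headers.decode("ascii", errors="replace"))      # status line and headers
    print(body[:200].decode("utf-8", errors="replace"))   # first part of the HTML page

Even this tiny exchange hints at why the design spread so quickly: the request and the response are plain text that any programming language can produce and parse.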

With the help of Robert Cailliau, Tim Berners-Lee published a more formal proposal on 12 November 1990, outlining his vision for a “hypertext project” called World Wide Web (abbreviated “W3”). This proposal described a system of “hypertext documents” that could be accessed and viewed using “browsers” using a client-server architecture.

The proposal was modeled after the Standard Generalized Markup Language (SGML) reader Dynatext, a system developed by the Institute for Research in Information and Scholarship at Brown University. However, the Dynatext system was deemed too expensive and had an inappropriate licensing policy for use within the high energy physics community, as it required a fee for each document and each document alteration. Berners-Lee’s proposal, on the other hand, was designed to be free and open for anyone to use, which helped it to quickly gain popularity among researchers and academics and eventually the general public.

By the time Berners-Lee published his proposal for the World Wide Web on 12 November 1990, the Hypertext Markup Language (HTML) and Hypertext Transfer Protocol (HTTP) had already been in development for about two months and the first web server was about a month away from completing its first successful test.

In his proposal, Berners-Lee estimated that a read-only version of the Web would be developed within three months and that it would take another six months to enable authorship capabilities, so that readers could create new links and new material and receive automatic notifications of new material of interest to them. These predictions were quite accurate: the first website went live at CERN in December 1990, the project was announced publicly on August 6, 1991, and authorship capabilities were added soon after.

By December 1990, Tim Berners-Lee and his team had developed all the necessary tools for a working World Wide Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup Language (HTML), the first web browser (named WorldWideWeb, which also functioned as a web editor), and the first web server (later known as CERN httpd). The first website (http://info.cern.ch), containing information about the World Wide Web project, was published on December 20, 1990.

The browser created by Berners-Lee’s team was able to access Usenet newsgroups and FTP files as well. A NeXT computer was used both to write the web browser and to run the first web server.

Working alongside Berners-Lee at CERN, Nicola Pellow also developed a simple text-based browser that could run on almost any computer, called the Line Mode Browser, which worked with a command-line interface. This browser helped to make the World Wide Web accessible to a wider range of users, including those without access to more advanced computers or graphical user interfaces.

One of the most important aspects of the World Wide Web’s success is that many people and organizations have created web servers that all use the same protocols, such as HTTP, URLs, and HTML. This allows for seamless communication and sharing of information across the internet.

Tim Berners-Lee, the inventor of the World Wide Web, has also played a crucial role in persuading people to join in and use these common protocols. He started the World Wide Web Consortium (W3C) to bring together people and companies who believe in the importance of the web and work on making it even better and more powerful. As the director of W3C, Berners-Lee continues to play a key role in this effort, but now thousands of people are working on a wide range of projects to improve the web.

In the next tutorial we will be talking about Hypertext, HTML and HTTP.
