
 



The Internet

 

 

The Internet is an outgrowth of a network established in the 1960s to meet the needs of researchers working in the defense industry in the USA: the ARPANET. Many people contributed to the development of the Internet. The initial phase of its development began back in the late 1950s. In order to regain space supremacy from the USSR (which had seized it from the US by launching Sputnik in 1957), the US government created an agency called ARPA (Advanced Research Projects Agency), with J.C.R. Licklider as the head of its computer research program.

Today's computer communication networks are based on a technology called packet switching. This technology, which arose from DARPA-sponsored research in the 1960s, is fundamentally different from the technology then employed by the telephone system (which was based on "circuit switching") or by the military messaging system (which was based on "message switching").

These efforts came together in 1977 when a four-network demonstration was conducted linking ARPANET, SATNET, Ethernet and PRNET. The satellite effort, in particular, drew international involvement from participants in the UK, Norway, and later Italy and Germany.

The name "Internet" refers to the global seamless interconnection of networks made possible by the protocols devised in the 1970s through DARPA-sponsored research, the innternet protocols, still inn use today.

 

 

HISTORY OF THE INTERNET

YEAR BY YEAR

 

1957:

The USSR launches Sputnik, the first artificial earth satellite. In response, the United States forms the Advanced Research Projects Agency (ARPA) within the Department of Defense (DoD) to establish a US lead in science and technology applicable to the military.

Backbones: None - Hosts: None


1962:

Paul Baran, of the RAND Corporation (a research organization), was commissioned by the U.S. Air Force to do a study on how it could maintain command and control over its missiles and bombers after a nuclear attack. This was to be a military research network that could survive a nuclear strike, decentralized so that if any locations (cities) in the U.S. were attacked, the military could still have control of nuclear arms for a counterattack. Baran's finished document described several ways to accomplish this. His final proposal was a packet-switched network.

"Packet switching is the breaking down of data into data grams or packets that are labeled to indicate the origin and the destination of the information and the forwarding of these packets from one computer to another computer until the information arrives at its final destination computer. This was crucial to the realization of a computer network. If packets are lost at any given point, the message can be resent by the originator".

Backbones: None - Hosts: None


1968:

ARPA awarded the ARPANET contract to BBN. BBN selected a Honeywell minicomputer as the base on which to build the switch. The physical network was constructed in 1969, linking four nodes: the University of California at Los Angeles, SRI (the Stanford Research Institute), the University of California at Santa Barbara, and the University of Utah. The network was wired together via 50Kbps circuits.

Backbones: 50Kbps ARPANET - Hosts: 4

 


1972:

The first e-mail program was created by Ray Tomlinson of BBN. The Advanced Research Projects Agency (ARPA) was renamed the Defense Advanced Research Projects Agency (DARPA). ARPANET was at this point using the Network Control Protocol (NCP) to transfer data, which allowed communications between hosts running on the same network.

Backbones: 50Kbps ARPANET - Hosts: 23

 

 

1973:

Development began on the protocol that would later be called TCP/IP; it was developed by a group headed by Vinton Cerf of Stanford and Bob Kahn of DARPA. This new protocol was to allow diverse computer networks to interconnect and communicate with each other.

Backbones: 50Kbps ARPANET - Hosts: 23+

 

 

1974:

First use of the term "Internet" by Vint Cerf and Bob Kahn in their paper on the Transmission Control Protocol.

Backbones: 50Kbps ARPANET - Hosts: 23+

 


1976:

Dr. Robert M. Metcalfe developed Ethernet, which allowed coaxial cable to move data extremely fast. This was a crucial component in the development of LANs. The packet satellite project went into practical use: SATNET, the Atlantic Packet Satellite network, was born. This network linked the United States with Europe. Surprisingly, it used INTELSAT satellites that were owned by a consortium of countries and not exclusively by the United States government. UUCP (Unix-to-Unix Copy) was developed at AT&T Bell Labs and distributed with UNIX one year later.

The Department of Defense began to experiment with the TCP/IP protocol and soon decided to require it for use on ARPANET.

Backbones: 50Kbps ARPANET, plus satellite and radio connections - Hosts: 111+


1979:

USENET (the decentralized news group network) was created by Steve Bellovin, a graduate student at the University of North Carolina, and programmers Tom Truscott and Jim Ellis. It was based on UUCP.

The creation of BITNET ("Because It's Time Network") by IBM introduced the "store and forward" network. It was used for e-mail and listservs.

Backbones: 50Kbps ARPANET, plus satellite and radio connections - Hosts: 111+

 

 

1981:

The National Science Foundation created a backbone called CSNET, a 56Kbps network for institutions without access to ARPANET. Vinton Cerf proposed a plan for an inter-network connection between CSNET and the ARPANET.

Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 213

 

 

1983:

The Internet Activities Board (IAB) was created. On January 1st, every machine connected to ARPANET had to use TCP/IP, which became the core Internet protocol and replaced NCP entirely.

The University of Wisconsin created the Domain Name System (DNS). This allowed packets to be directed to a domain name that would be translated by the server database into the corresponding IP number. This made it much easier for people to access other servers, because they no longer had to remember numbers.

Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 562

 

 

1984:

The National Science Foundation began deploying its new T1 lines, which would finish by 1988.

Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 1,961

 

 

1986:

The Internet Engineering Task Force (IETF) was created to serve as a forum for technical coordination by contractors for DARPA working on ARPANET, the US Defense Data Network (DDN), and the Internet core gateway system.

Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 2,308

 

 

1987:

BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN), another project supported by the National Science Foundation.

Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 28,174

 

 

1988:

Soon after the completion of the T1 NSFNET backbone, traffic increased so quickly that plans immediately began on upgrading the network again.

Merit and its partners formed a not-for-profit corporation called ANS, Advanced Network & Services, which was to conduct research into high-speed networking. It soon came up with the concept of the T3, a 45Mbps line. NSF quickly adopted the new network, and by the end of 1991 all of its sites were connected by this new backbone.

Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 56,000

 


1990:

While the T3 lines were being constructed, the Department of Defense disbanded the ARPANET, and it was replaced by the NSFNET backbone. The original 50Kbps lines of ARPANET were taken out of service.

Tim Berners-Lee and CERN in Geneva implemented a hypertext system to provide efficient information access to the members of the international high-energy physics community.

Backbones: 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 313,000

 


1991:

CSNET (which consisted of 56Kbps lines) was discontinued, having fulfilled its important early role in the provision of academic networking services. A key feature of CREN is that its operational costs are fully met through dues paid by its member organizations.

The NSF established a new network, named NREN, the National Research and Education Network. The purpose of this network was to conduct high-speed networking research. It was not to be used to send the bulk of the data that the Internet now transfers.

Backbones: Partial 45Mbps (T3) NSFNET, a few private backbones, plus satellite and radio connections - Hosts: 617,000

 

 

1992:

Internet Society is chartered.
World-Wide Web released by CERN.
NSFNET backbone upgraded to T3 (44.736Mbps)

Backbones: 45Mbps (T3) NSFNET, private interconnected backbones consisting mainly of 56Kbps and 1.544Mbps lines, plus satellite and radio connections - Hosts: 1,136,000

 


1993:

InterNIC created by NSF to provide specific Internet services: directory and database services (by AT&T), registration service (by Network Solutions Inc.), and information services (by General Atomics/CERFnet).

Marc Andreessen and the NCSA at the University of Illinois developed a graphical user interface to the WWW, called "Mosaic for X".

Backbones: 45Mbps (T3) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, plus satellite and radio connections - Hosts: 2,056,000

 

 

1994:

No major changes were made to the physical network. The most significant thing that happened was the growth. Many new networks were added to the NSF backbone. Hundreds of thousands of new hosts were added to the INTERNET during this time period.

Pizza Hut offers pizza ordering on its Web page. First Virtual, the first cyberbank, opens.

ATM (Asynchronous Transfer Mode, 145Mbps) backbone is installed on NSFNET.

Backbones: 145Mbps (ATM) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, plus satellite and radio connections - Hosts: 3,864,000

 


1995:

The National Science Foundation announced that as of April 30, 1995 it would no longer allow direct access to the NSF backbone. The National Science Foundation contracted with four companies that would be providers of access to the NSF backbone (Merit). These companies would then sell connections to groups, organizations, and companies.

A $50 annual fee is imposed on domains, excluding .edu and .gov domains, which are still funded by the National Science Foundation.

Backbones: 145Mbps (ATM) NSFNET (now private), private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines in construction, plus satellite and radio connections - Hosts: 6,642,000

 

 

1996

Most Internet traffic is carried by backbones of independent ISPs, including MCI, AT&T, Sprint, UUNET, BBN Planet, ANS, and more.


Currently the Internet Society, the group that controls the Internet, is trying to work out a new version of TCP/IP that would allow billions of addresses, rather than the limited system of today. The problem that has arisen is that it is not known how both the old and the new addressing systems will be able to work at the same time during a transition period.

Backbones: 145Mbps (ATM) NSFNET (now private), private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, 45Mbps, and 155Mbps lines, plus satellite and radio connections - Hosts: over 15,000,000 and growing rapidly

 

 

INTERNET SERVICE PROVIDERS (ISPs)

 

The ISP that you use might depend on your local area. This site doesn't recommend any particular service because they all provide one basic function: connecting you to the rest of the world. Each ISP is unique because each has a slightly different format. You might try several before settling on one. Some services, such as AT&T WorldNet and America Online, offer trial memberships in which you can "try before you buy". Software used on the physical computer to make it a server that can speak the protocols of the Internet and respond accordingly is called Internet server software. The particular Internet server software manufactured by Microsoft is Internet Information Server (IIS).

For a client computer to be able to communicate with a server on the Internet, it must have a connection to the Internet. Then, when connected, it must have a way to contact and receive data from Internet servers through the various protocols. The connection is accomplished via an Internet service provider (ISP), such as America Online, CompuServe, MSN, AT&T WorldNet, MCI, or Sprint. Communicating with the server and deciphering the data it returns is handled by the Internet browser, such as Microsoft's Internet Explorer or Netscape Navigator.

Where does Visual Basic fit into all this? Microsoft has positioned Visual Basic to play an important role on both the client side and the server side. On the client side, you can use a derivative of Visual Basic, VBScript, to create Web page programs that run from within Internet Explorer. You can also use Visual Basic to create custom ActiveX components that you can embed in Web pages and run as any custom ActiveX component would.

 

 

PROTOCOLS

NETWORK PROTOCOLS:-

 

Protocols are the agreed-upon ways in which computers exchange information. Computers need to communicate at many levels and in many different ways, so there are many corresponding network protocols. Select the appropriate network and transport protocol or protocols for various token-ring and Ethernet networks. Protocol choices include:

DLC
AppleTalk
IPX
TCP/IP
NFS
SMB

There are protocols at various levels in the OSI model. In fact, it is the protocols at a level in the OSI model that provide the functionality of that level. Protocols that work together to provide a layer or layers of the OSI model are known as a protocol stack, or suite.

 


HOW PROTOCOLS WORK:-

A protocol is a set of basic steps that both parties (or computers) must perform in the right order. For instance, for one computer to send a message to another computer, the first computer must perform the following steps. (This is a general example; the actual steps are much more detailed.)

1. Break the data into small sections called packets.
2. Add addressing information to the packets identifying the destination computer.
3. Deliver the data to the network card for transmission over the network.


The receiving computer must perform the same steps, but in reverse order.


1. Accept the data from the network adapter card.
2. Remove the transmitting information that was added by the transmitting computer.
3. Reassemble the packets of data into the original message.

Each computer needs to perform the same steps, in the same way, so that the data will arrive and be reassembled properly. If one computer uses a protocol with different steps, or even the same steps with different parameters (such as different sequencing, timing, or error correction), the two computers will not be able to communicate with each other.
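
To make these steps concrete, here is a minimal Python sketch. The packet fields, addresses, and packet size are invented for this illustration and do not belong to any real protocol; the sketch simply breaks a message into addressed, numbered packets and reassembles them at the "receiving" end even if they arrive out of order.

```python
# Minimal illustration of the sender/receiver steps described above.
# The field names, addresses, and packet size are invented for this example only.

PACKET_SIZE = 16  # bytes of payload per packet (arbitrary for the demo)

def packetize(message: bytes, source: str, destination: str):
    """Sender: break the data into packets and add addressing information."""
    packets = []
    for seq, start in enumerate(range(0, len(message), PACKET_SIZE)):
        packets.append({
            "source": source,
            "destination": destination,
            "sequence": seq,
            "payload": message[start:start + PACKET_SIZE],
        })
    return packets

def reassemble(packets):
    """Receiver: put the packets back in order and rebuild the original message."""
    ordered = sorted(packets, key=lambda p: p["sequence"])
    return b"".join(p["payload"] for p in ordered)

if __name__ == "__main__":
    msg = b"Protocols are the agreed-upon ways in which computers exchange information."
    sent = packetize(msg, source="10.0.0.1", destination="10.0.0.2")
    received = list(reversed(sent))  # simulate packets arriving out of order
    assert reassemble(received) == msg
    print(f"{len(sent)} packets delivered and reassembled correctly")
```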

 

 

OSI MODEL

 

OSI, short for Open Systems Interconnection, is an international standard that defines seven layers of protocols for worldwide computer communication.

 

7- APPLICATION LAYER:

The application layer is the topmost layer of the OSI model, and it provides services that directly support user applications, such as database access, e-mail, and file transfers. It also allows applications to communicate with applications on other computers as though they were on the same computer. When a programmer writes an application program that uses network services, this is the layer that the application program will access.

 

 

6- PRESENTATION LAYER:

The presentation layer translates data between the formats the network requires and the formats the computer expects. The presentation layer handles protocol conversion, data translation, compression and encryption, character set conversion, and the interpretation of graphics commands.

 

 

5- SESSION LAYER:

The session layer allows applications on separate computers to share a connection called a session. This layer provides services such as name lookup and security to allow two programs to find each other and establish the communications link. The session layer also provides for data synchronization and checkpointing so that, in the event of a network failure, only the data sent after the point of failure need be re-sent.

The layer also controls the dialog between two processes, determining who can transmit and who can receive at what point during the communication.

 

 

4- TRANSPORT LAYER:

The transport layer ensures that packets are delivered error free, in sequence, and with no losses or duplications. The transport layer breaks large messages from the session layer into packets to be sent to the destination computer and reassembles packets into messages to be presented to the session layer. The transport layer typically sends an acknowledgement to the originator for messages received.

 

 

3- NETWORK LAYER:

The network layer makes routing decisions and forwards packets for devices that are farther away than a single link. (A link connects two network devices and is implemented by the data link layer. Two devices connected by a link communicate directly with each other and not through a third device.) In larger networks there may be intermediate systems between any two end systems, and the network layer makes it possible for the transport layer and the layers above it to send packets without being concerned about whether the end system is immediately adjacent or several hops away.

The network layer translates logical network addresses into physical machine addresses (the numbers used as destination IDs in the physical network cards). This layer also determines the quality of service (such as the priority of the message) and the route a message will take if there are several ways a message can get to its destination. The network layer also may break large packets into smaller chunks if the packet is larger than the largest data frame the data link layer will accept, and it reassembles the chunks into packets at the receiving end.
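
As a rough illustration of a routing decision, the toy Python sketch below picks the most specific route that matches a destination address (longest-prefix matching, as used on today's Internet). The routing table entries and next-hop values are invented for this example.

```python
import ipaddress

# A toy routing table: destination network -> what to do with the packet.
# All entries and next hops below are invented for illustration.
ROUTING_TABLE = {
    ipaddress.ip_network("192.168.1.0/24"): "deliver directly (local link)",
    ipaddress.ip_network("10.0.0.0/8"):     "forward to next hop 192.168.1.254",
    ipaddress.ip_network("0.0.0.0/0"):      "forward to default gateway 192.168.1.1",
}

def route(destination: str) -> str:
    """Pick the most specific (longest-prefix) route that contains the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

if __name__ == "__main__":
    for dest in ("192.168.1.20", "10.1.2.3", "8.8.8.8"):
        print(dest, "->", route(dest))
```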

 

 

2- DATA LINK LAYER:

The data link layer provides for the flow of data over a single link from one device to another. It accepts packets from the network layer and packages the information into data units called frames to be presented to the physical layer for transmission. The data link layer adds control information, such as frame type, routing, and segmentation information, to the data being sent.

This layer provides for the error-free transfer of frames from one computer to another. A Cyclic Redundancy Check (CRC) added to the data frame can detect damaged frames, and the data link layer in the receiving computer can request that the information be resent. The data link layer can also detect when frames are lost and request that those frames be sent again.

 

 

1- PHYSICAL LAYER:

The physical layer is simply responsible for sending bits (the binary 1's and 0's of digital communication) from one computer to another. The physical layer is not concerned with the meaning of the bits; instead it deals with the physical connection to the network and with the transmission and reception of signals.

This level defines physical and electrical details, such as what will represent a 1 or a 0, how many pins a network connector will have, how data will be synchronized, and when the network adapter may or may not transmit the data.
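
To tie the seven layers together, the following simplified Python sketch shows the general idea of encapsulation: on the way down the stack each layer wraps what it received from above with its own header, and on the way up each layer removes the header added by its peer. The header contents here are placeholder strings for illustration only; real headers are binary structures defined by each protocol.

```python
# Simplified illustration of OSI-style encapsulation.
# Real headers are binary structures; readable strings are used here instead.

LAYERS = ["application", "presentation", "session", "transport", "network", "data link"]

def send_down_stack(data: str) -> str:
    """Each layer adds its own header in front of what it received from above."""
    for layer in LAYERS:
        data = f"[{layer} header]{data}"
    return data  # the physical layer would then transmit this as bits

def receive_up_stack(frame: str) -> str:
    """Each layer strips the header added by its peer on the sending side."""
    for layer in reversed(LAYERS):
        header = f"[{layer} header]"
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

if __name__ == "__main__":
    wire_data = send_down_stack("Hello, remote application!")
    print("On the wire:", wire_data)
    print("Delivered:  ", receive_up_stack(wire_data))
```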

 

 

INTRODUCTION TO TCP/IP

 

TCP/IP is short for Transmission Control Protocol / Internet Protocol. TCP and IP were developed by a Department of Defense (DOD) research project to connect a number of different networks designed by different vendors into a network of networks (the "Internet"). It was initially successful because it delivered a few basic services that everyone needs (file transfer, electronic mail, remote logon) across a very large number of client and server systems. Several computers in a small department can use TCP/IP (along with other protocols) on a single LAN. The IP component provides routing from the department to the enterprise network, then to regional networks, and finally to the global Internet. On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be robust and to recover automatically from any node or phone line failure. This design allows the construction of very large networks with little central management. However, because of the automatic recovery, network problems can go undiagnosed and uncorrected for long periods of time.
As with all other communications protocols, TCP/IP is composed of layers.

 

 

HISTORY OF TCP/IP


In an effort to cut the costs of development, the Advanced Research Projects Agency (ARPA) of the Department of Defense (DOD) began coordinating the development of a vendor-independent network to tie major research sites together. The logic behind this was clear: the cost and time required to develop an application on one system were too great for each site to re-write the application for its own systems. Since each facility used different computers with proprietary networking technology, the need for a vendor-independent network was the first priority. In 1968, work began on a private packet-switched network.

In the early 1970s, authority over the project was transferred to the Defense Advanced Research Projects Agency (DARPA). Although the original ARPAnet protocols were written for use with the ARPA packet-switched network, they were also designed to be usable on other networks. In 1981, DARPA switched its focus to the TCP/IP protocol suite, placing it into the public domain for implementation by private vendors. Shortly thereafter, TCP/IP was adopted by the University of California at Berkeley, which began bundling it with its freely distributed version of UNIX, BSD. In 1983, DARPA mandated that all new systems connecting to the ARPA network had to use TCP/IP, thus guaranteeing its long-term success.

 

 

COMPONENTS OF A TCP/IP NETWORK

To fully comprehend computer networking using the TCP/IP protocol, it is necessary to establish the various components of a computer network and of the TCP/IP protocol. Think of a protocol as the language spoken by a computer. Packets are blocks of information passed between computers through a medium such as a telephone wire. Computer networks consist of hosts and networks. A host is essentially anything on the network that is capable of receiving and transmitting Internet Protocol (IP) packets, such as a workstation or a router. These hosts are connected together by one or more networks. The IP address of any host consists of its network address plus its own host address on the network. Unlike other protocols, IP addressing uses one address containing both the network and the host address. The TCP/IP protocol consists of two protocols, as the name suggests. The Transmission Control Protocol (TCP) is the portion of the protocol that puts together and reads the software packets that are sent from host to host through the network. The IP is the addressing part of the protocol and is very complex.

Hosts and networks are considered as nodes. Each node's address is a 32-bit binary number (see Figure 1). For convenience, this is broken into four 8-bit fields, called octets, separated by periods. Most modern TCP/IP products represent these binary octets with their decimal number equivalents. The maximum decimal value of an eight-bit binary number is 255. The use of decimal numbers instead of binary numbers aids readability: although computers have no trouble dealing with 32-bit binary strings, humans typically have difficulty reading binary numbers.

Figure 1

32-bit IP address, decimal values: 192.168.1.20

32-bit IP address, binary values: 11000000.10101000.00000001.00010100
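
The conversion shown in Figure 1 can be reproduced with a few lines of Python; the sketch below is for illustration only.

```python
# Convert a dotted-decimal IPv4 address to its 32-bit binary form and back.

def to_binary(dotted: str) -> str:
    """'192.168.1.20' -> '11000000.10101000.00000001.00010100'"""
    return ".".join(f"{int(octet):08b}" for octet in dotted.split("."))

def to_decimal(binary: str) -> str:
    """'11000000.10101000.00000001.00010100' -> '192.168.1.20'"""
    return ".".join(str(int(octet, 2)) for octet in binary.split("."))

if __name__ == "__main__":
    print(to_binary("192.168.1.20"))    # 11000000.10101000.00000001.00010100
    print(to_decimal("11000000.10101000.00000001.00010100"))  # 192.168.1.20
```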

 

 

TRANSMISSION CONTROL PROTOCOL (TCP)


In contrast to the host-based network architectures of the time, TCP/IP proved to be useful on a variety of systems without a central controlling system. With TCP/IP, there is no central authority as there is in IBM SNA networks. Nodes communicate directly among themselves, and each maintains complete knowledge about the available network services. If any host fails, none of the others knows or cares (unless they need data from the down machine, of course). TCP/IP networks were designed to be robust: in battlefield conditions, the loss of a node or line is a normal circumstance. Casualties can be sorted out later, but the network must continue operating. These networks automatically reconfigure themselves when something goes wrong, and if there is enough redundancy built into the system, communication is maintained. This is very similar to the telephone systems found in most developed countries today.

Routers perform the task of moving traffic between networks. A node that needs to send data to another node on another network will send the data to a router, and the router will then send the data on to the destination node. If the destination isn't on a directly connected network, the router will send the data to another router for delivery. The TCP portion of the protocol was designed to recover from node or line failures where the network generates routing table changes to all router nodes. Since the update takes some time, TCP is slow to initiate recovery. The TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of the lines and equipment in order to stay ahead of growth and demand.

TCP treats the data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet starts with byte 379642 and contains 200 bytes of data." Often packets are sent via different routes and are received out of sequence. The receiver can detect missing or incorrectly sequenced packets and requests retransmission. TCP acknowledges data received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the two nodes. There is no formal standard for tracking problems in the middle of the network.
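
The sequence-numbering idea can be sketched in a few lines of Python. The segment format below is invented and far simpler than a real TCP header: segments that arrive out of order are placed back into the byte stream according to the starting sequence number each one carries, and any gap left over marks data that would have to be retransmitted.

```python
# Illustration of rebuilding a byte stream from out-of-order, TCP-style segments.
# Each segment is (starting_sequence_number, payload_bytes); the format is invented
# for this example and omits everything else a real TCP header carries.

def reassemble_stream(segments, initial_seq=0):
    buffered = {seq: data for seq, data in segments}
    stream = b""
    expected = initial_seq
    while expected in buffered:
        data = buffered.pop(expected)
        stream += data
        expected += len(data)          # sequence number of the next byte we expect
    missing = sorted(buffered)         # anything left behind a gap awaits retransmission
    return stream, missing

if __name__ == "__main__":
    # "This packet starts with byte N and contains len(data) bytes of data."
    segments = [(11, b"world"), (0, b"Hello,"), (6, b" the "), (16, b"!")]
    stream, missing = reassemble_stream(segments)
    print(stream)    # b'Hello, the world!'
    print(missing)   # [] -> nothing needs to be retransmitted
```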

 

INTERNET PROTOCOL (IP) ADDRESSING SCHEME


The addressing scheme is broken down into three separate classes: A, B, and C. Some sites have one very large network with millions of nodes. They would use the first octet of the address to identify the network, and the remaining three octets would be used to identify the individual workstations. This is known as a "Class A" address. The most common users of "Class A" addresses are network service providers, who maintain extremely large and flat networks with millions of nodes. "Class A" addresses are identified by the first bit of the 32-bit address being set to "0". Since "Class A" networks only use the first 8 bits for the network number, and the first of those bits is fixed, this leaves only 7 bits for the network portion of the address, and therefore only 128 possible networks. Network numbers 000 and 127 are reserved, so there are really only 126 possible networks (001 through 126). However, there are 24 bits available for identifying nodes, for a maximum of 16,777,214 possible hosts on each network.

Another site may have thousands of nodes, split across many networks. They would use a "Class B" address. The first two octets are used to identify the network, and the remaining two octets are used to identify the individual nodes. Universities and large organizations are the most common users of "Class B" addresses. "Class B" addresses are identified by having the first two bits set to "10". Since they use the first two octets to identify the network, and the first two bits are fixed, this leaves 14 bits to identify each network. Thus, there are 16,384 possible "Class B" networks, ranging from 128.1 to 191.254 (000 and 255 are reserved).

Finally, the most common address is the "Class C" address, where the first three octets are used to identify the network segment, and the last octet is used to identify the workstations. These are good for sites that only have a few dozen nodes, although they may have many such networks. "Class C" addresses are identified by having the first three bits of the first octet set to "110". "Class C" addresses use the first three octets to identify the network, so, with the first three bits fixed, there are 21 bits available for the network number. The possible network numbers range from 192.1.1 through 223.254.254, for a grand total of 2,097,152 possible networks. However, since there is only one octet left to identify the nodes, there can only be 254 possible devices on each segment (256, minus addresses 000 and 255).
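
As a quick check of these leading-bit rules, the class of an address can be read off its first octet. The Python sketch below is for illustration only; classful addressing has since been superseded by CIDR in practice.

```python
# Determine the classful network class of an IPv4 address from its first octet.
# Class A: leading bit  0   -> first octet   1-126 (0 and 127 reserved)
# Class B: leading bits 10  -> first octet 128-191
# Class C: leading bits 110 -> first octet 192-223

def address_class(dotted: str) -> str:
    first_octet = int(dotted.split(".")[0])
    if first_octet in (0, 127):
        return "reserved"
    if first_octet < 128:
        return "A"
    if first_octet < 192:
        return "B"
    if first_octet < 224:
        return "C"
    return "D/E (multicast or experimental)"

if __name__ == "__main__":
    for addr in ("10.1.2.3", "172.16.4.5", "192.168.1.20", "224.0.0.1"):
        print(addr, "-> Class", address_class(addr))
```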

 

 

ADVANTAGES OF TCP/IP

The advantages of TCP/IP include the following:

Broad connectivity among all types of computers and servers.
Direct access to the global Internet.
Strong support for routing.
Simple Network Management Protocol (SNMP) support.
Support for Dynamic Host Configuration Protocol (DHCP) to dynamically assign client IP addresses.
Support for the Windows Internet Name Service (WINS) to allow name browsing among Microsoft clients and servers.
Support for most other Internet protocols, such as Post Office Protocol, Hypertext Transfer Protocol, and any other protocol acronym ending in P.

Centralized TCP/IP domain assignment, which allows internetworking between organizations.

If you have a network that spans more than one metropolitan area, you will probably need to use TCP/IP. Think of TCP/IP as the truck of transport protocols: it's not fast or easy to use, but it is routable over wide, complex networks and provides more error correction than any other protocol. TCP/IP is supported on every modern computer and operating system. Like a truck, TCP/IP also has some disadvantages, described in the next section.

 

 

DISADVANTAGES OF TCP/IP

TCP/IP has some disadvantages:

Centralized TCP/IP domain assignment, which requires registration effort and cost.
Global expansion of the Internet, which has seriously limited the availability of unique network addresses. A new version of IP will correct this problem when it is implemented.
Difficulty of setup.
Relatively high overhead to support seamless connectivity and routing.
Slower speed than IPX and NetBEUI.

TCP/IP is the slowest of all the protocols included with Windows NT.
It is also relatively difficult to administer correctly, although new tools, such as DHCP, make it a little easier.

 

 

THE NEXT GENERATION OF TCP/IP

 

To address the shortage of available IP addresses, IPv6 is being developed. IPv6 addresses are 128-bit (16-octet) identifiers for interfaces and sets of interfaces. IPv6 supports four times the number of bits of IPv4 (128 bits versus 32). This corresponds to an address space 2^96 times the size of the IPv4 address space. Although this is an extremely large address space, the assignment and routing of addresses require the use of hierarchical schemes that reduce the efficiency of the address space usage. Nevertheless, it is estimated that, in the worst case, 128-bit IPv6 addresses can accommodate about 10^18 hosts, which is still extremely large. Theoretically, this increased address capacity can accommodate one address for every three square meters of the earth's surface.
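
As a quick sanity check of these numbers, a couple of lines of Python confirm the ratio between the two address spaces:

```python
# IPv6 addresses are 128 bits long; IPv4 addresses are 32 bits long.
ipv6_space = 2 ** 128   # total possible IPv6 addresses (about 3.4 x 10**38)
ipv4_space = 2 ** 32    # total possible IPv4 addresses (4,294,967,296)

# The IPv6 address space is 2**(128 - 32) = 2**96 times the size of the IPv4 space.
print(ipv6_space // ipv4_space == 2 ** 96)   # True
```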


There are three conventions for representing IPv6 addresses as text strings: the preferred form (the full IPv6 address written in hexadecimal), the compressed form (with substitution of zero strings), and the mixed form (convenient for mixed environments of IPv4 and IPv6 nodes). An example of the preferred form is:


FEDC:2A5F:709C:AEBC:97:3154:3D12

 

An example of the compressed form, in which strings of zeroes in the address are omitted, is:

 

FF08::209A:61

 

This new naming convention might seem complicated at first because decimal values are not used, but it is still easier than reading a string of sixteen 8-bit binary numbers. The convention assumes that only trained technicians will be working with the assignment of IP addresses; therefore, readability by the average person is not essential.
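
Python's standard ipaddress module can display one address in both the full (preferred) and the compressed conventions; the sketch below simply reuses the example address given above.

```python
import ipaddress

# The example IPv6 address from above, written in compressed form
# ("::" stands for a run of zero groups).
addr = ipaddress.IPv6Address("FF08::209A:61")

print(addr.exploded)     # ff08:0000:0000:0000:0000:0000:209a:0061  (full form)
print(addr.compressed)   # ff08::209a:61                            (compressed form)
```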

 


 
