Tuesday, 1 May 2012


The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (often called TCP/IP, although not all applications use TCP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support email.
Most traditional communications media including telephone, music, film, and television are reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV). Newspaper, book and other print publishing are adapting to Web site technology, or are reshaped into blogging and web feeds. The Internet has enabled or accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2011, more than 2.2 billion people, nearly a third of Earth's population, use the services of the Internet.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

The Internet standards describe a framework known as the Internet protocol suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the application layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the transport layer connects applications on different hosts via the network (e.g., the client–server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The internet layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Last, at the bottom of the architecture is the link layer, which provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which it therefore does not describe in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or implementation; many similarities exist, however, and the TCP/IP protocols are usually included in discussions of OSI networking.
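As an illustration of this layering, the sketch below uses plain Python dictionaries as stand-in "headers" (not any real network stack): a piece of application data is wrapped by the transport, internet, and link layers in turn, and a receiver peels the layers back off in order.

```python
# Illustrative sketch only: each layer wraps the payload handed down by
# the layer above it, mirroring the four-layer TCP/IP model. The port,
# address, and payload values are made up for the example.
def encapsulate(app_data):
    transport = {"protocol": "TCP", "dst_port": 80, "payload": app_data}
    internet = {"protocol": "IP", "dst_addr": "93.184.216.34", "payload": transport}
    link = {"medium": "Ethernet", "payload": internet}
    return link

frame = encapsulate("GET / HTTP/1.1")

# A receiving host processes the layers from the outside in:
layer = frame
while isinstance(layer, dict):
    print(layer.get("protocol") or layer.get("medium"))
    layer = layer["payload"]
print("application data:", layer)
```

The loop prints the link, internet, and transport layers in turn before reaching the original application data, which is the order in which a real host parses an incoming frame.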
The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and in essence establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to approximately 4.3 billion (2^32) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global address allocation pool was exhausted. A new protocol version, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 deployment is growing around the world, as Internet address registries (RIRs) have urged all resource managers to plan rapid adoption and conversion.
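The gap between the two address spaces follows directly from the address widths, as a quick calculation shows:

```python
# Address-space sizes follow from the address widths:
# IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits wide.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # 4,294,967,296 (~4.3 billion)
print(f"IPv6: {ipv6_addresses:.2e} addresses")  # ~3.40e+38
```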
IPv6 is not interoperable with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for networking devices that need to communicate on both networks. Most modern computer operating systems already support both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
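The parallel name spaces are visible even from application code. The short sketch below uses only Python's standard socket module; the address literals come from the reserved documentation ranges (192.0.2.0/24 and 2001:db8::/32) and are examples only. The two families simply have different binary address sizes, which is why software must support each one explicitly:

```python
import socket

# Whether this platform/build can create IPv6 sockets at all:
print("IPv6 supported:", socket.has_ipv6)

# The two address families use different binary address sizes:
ipv4 = socket.inet_pton(socket.AF_INET, "192.0.2.1")     # packs to 4 bytes
ipv6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")  # packs to 16 bytes
print(len(ipv4), "vs", len(ipv6), "bytes")
```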


Internet packet routing is accomplished among various tiers of Internet Service Providers.
Internet Service Providers connect customers (thought of as sitting at the "bottom" of the routing hierarchy) to customers of other ISPs. At the "top" of the routing hierarchy are ten or so Tier 1 networks, large telecommunication companies which exchange traffic directly "across" to all other Tier 1 networks via unpaid peering agreements. Tier 2 networks buy Internet transit from other ISPs to reach at least some parties on the global Internet, though they may also engage in unpaid peering (especially for local partners of a similar size). ISPs can use a single "upstream" provider for connectivity, or use multihoming to provide protection from problems with individual links. Internet exchange points create physical connections between multiple ISPs, often hosted in buildings owned by independent third parties.
Computers and routers use routing tables to direct IP packets among locally connected machines. Tables can be constructed manually or automatically via DHCP for an individual computer or a routing protocol for routers themselves. In single-homed situations, a default route usually points "up" toward an ISP providing transit. Higher-level ISPs use the Border Gateway Protocol to sort out paths to any given range of IP addresses across the complex connections of the global Internet.
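The lookup a router performs against its table can be sketched with Python's standard ipaddress module. The prefixes and next-hop names below are invented for illustration; 0.0.0.0/0 plays the role of the default route pointing "up" toward a transit provider, and the most specific (longest) matching prefix wins:

```python
import ipaddress

# Toy routing table of (prefix, next hop) pairs. The prefixes and hop
# names are hypothetical; 0.0.0.0/0 matches everything and so acts as
# the default route of last resort.
table = [
    (ipaddress.ip_network("10.0.0.0/8"), "internal-core"),
    (ipaddress.ip_network("10.1.2.0/24"), "branch-office"),
    (ipaddress.ip_network("0.0.0.0/0"), "upstream-isp"),
]

def next_hop(destination):
    """Longest-prefix match, as routers do."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.7"))  # branch-office (most specific match)
print(next_hop("10.9.9.9"))  # internal-core
print(next_hop("8.8.8.8"))   # upstream-isp (default route)
```

Real routers do the same longest-prefix match in hardware; BGP's job is to fill tables like this one with paths learned from neighboring networks.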
Academic institutions, large companies, governments, and other organizations can perform the same role as ISPs, engaging in peering and purchasing transit on behalf of their internal networks of individual computers. Research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. These in turn are built around smaller networks (see the list of academic computer network organizations).
Not all computer networks are connected to the Internet. For example, some classified United States websites are only accessible from separate secure networks.

General structure
The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization; for example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for Internet traffic reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is under investigation. The Internet structure has been found to be highly robust to random failures yet very vulnerable to deliberate attacks on high-degree nodes.

The standards group CCITT defined "broadband service" in 1988 as requiring transmission channels capable of supporting bit rates greater than the primary rate, which ranged from about 1.5 to 2 Mbit/s. The US National Information Infrastructure project of the 1990s brought the term into public policy debates.
Broadband became a marketing buzzword for telephone and cable companies to sell their more expensive higher data rate products, especially for Internet access. In the US National Broadband Plan of 2009 it was defined as "Internet access that is always on and faster than the traditional dial-up access". The same agency has defined it differently through the years.
Even though information signals generally travel at nearly the speed of light in the medium no matter what the bit rate, higher-rate services are often marketed as "faster" or "higher speed". (This use of the word "speed" may or may not be appropriate, depending on context. It would be accurate, for instance, to say that a file of a given size will typically take less time to finish transferring over broadband than over dial-up.) Consumers are also targeted by advertisements quoting peak transmission rates, while actual end-to-end rates observed in practice can be lower due to other factors.
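For a concrete sense of the difference: transfer time is file size divided by data rate, while the propagation delay of the signal itself is essentially the same on both kinds of link. The rates below are illustrative examples (a 56 kbit/s modem versus a 10 Mbit/s broadband line):

```python
# Transfer time = size in bits / rate in bits per second.
# File size and link rates below are example values.
def transfer_seconds(size_bytes, rate_bits_per_s):
    return size_bytes * 8 / rate_bits_per_s

file_size = 5 * 1024 * 1024  # a 5 MiB file
dialup = 56_000              # 56 kbit/s modem
broadband = 10_000_000       # 10 Mbit/s broadband line

print(f"dial-up:   {transfer_seconds(file_size, dialup) / 60:.1f} minutes")
print(f"broadband: {transfer_seconds(file_size, broadband):.1f} seconds")
```

The same file that ties up a modem for over twelve minutes finishes in a few seconds over the broadband link, even though individual bits propagate at roughly the same physical speed on both.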

To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller. The combination of computer and interface controller is called a station. All stations share a single radio frequency communication channel. Transmissions on this channel are received by all stations within range. The hardware does not signal the user that the transmission was delivered and is therefore called a best-effort delivery mechanism. A carrier wave is used to transmit the data in packets, referred to as "Ethernet frames". Each station is constantly tuned in to the radio frequency communication channel to pick up available transmissions.

Internet access
A Wi-Fi-enabled device can connect to the Internet when within range of a wireless network connected to the Internet. The coverage of one or more (interconnected) access points — called hotspots — can extend from an area as small as a few rooms to as large as many square miles. Coverage in the larger area may require a group of access points with overlapping coverage. Outdoor public Wi-Fi technology has been used successfully in wireless mesh networks in London, UK.
Wi-Fi provides service in private homes, high street chains and independent businesses, as well as in public spaces at Wi-Fi hotspots set up either free-of-charge or commercially. Organizations and businesses, such as airports, hotels, and restaurants, often provide free-use hotspots to attract customers. Enthusiasts or authorities who wish to provide services or even to promote business in selected areas sometimes provide free Wi-Fi access.
Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, often set up in homes and other buildings, provide Internet access and internetworking to all devices connected to them, wirelessly or via cable. With the emergence of MiFi and WiBro (a portable Wi-Fi router), people can easily create their own Wi-Fi hotspots that connect to the Internet via cellular networks. Now Android, Bada, iOS (iPhone), and Symbian devices can create wireless connections. Wi-Fi also connects places that normally don't have network access, such as kitchens and garden sheds.

City-wide Wi-Fi
An outdoor Wi-Fi access point
In the early 2000s, many cities around the world announced plans to construct city-wide Wi-Fi networks. There are many successful examples; in 2005, Sunnyvale, California, became the first city in the United States to offer city-wide free Wi-Fi, and Minneapolis has generated $1.2 million in profit annually for its provider.
In 2004, Mysore became India's first Wi-Fi-enabled city and second in the world after Jerusalem. A company called WiFiyNet has set up hotspots in Mysore, covering the complete city and a few nearby villages.
In May 2010, London, UK, Mayor Boris Johnson pledged to have London-wide Wi-Fi by 2012. Several boroughs including Westminster and Islington already have extensive outdoor Wi-Fi coverage.
Officials in South Korea's capital are moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets and densely populated residential areas. Seoul will grant leases to KT, LG Telecom and SK Telecom. The companies will invest $44 million in the project, which will be completed in 2015.

Campus-wide Wi-Fi
Many traditional college campuses provide at least partial Wi-Fi Internet coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993, before Wi-Fi branding originated.
In 2000, Drexel University in Philadelphia became the first major university in the United States to offer completely wireless Internet access across its entire campus.

Direct computer-to-computer communications

Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. This wireless ad hoc network mode has proven popular with multiplayer handheld game consoles such as the Nintendo DS and PlayStation Portable, as well as with digital cameras and other consumer electronics devices. Some devices can also share their Internet connection in ad hoc mode, becoming hotspots or "virtual routers".
Similarly, the Wi-Fi Alliance promotes a specification called Wi-Fi Direct for file transfers and media sharing through a new discovery and security methodology. Wi-Fi Direct launched in October 2010.

Advantages and limitations

Wi-Fi allows cheaper deployment of local area networks (LANs). Spaces where cables cannot be run, such as outdoor areas and historic buildings, can also host wireless LANs.
Manufacturers are building wireless network adapters into most laptops. The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in even more devices.
Different competitive brands of access points and client network-interfaces can inter-operate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backwards compatible. Unlike mobile phones, any standard Wi-Fi device will work anywhere in the world.
Wi-Fi Protected Access encryption (WPA2) is considered secure, provided a strong passphrase is used. New protocols for quality-of-service (WMM) make Wi-Fi more suitable for latency-sensitive applications (such as voice and video). Power saving mechanisms (WMM Power Save) extend battery life.

Spectrum assignments and operational limitations are not consistent worldwide: most of Europe allows two additional channels beyond those permitted in the US for the 2.4 GHz band (1–13 vs. 1–11), while Japan has one more on top of that (1–14). As of 2007, Europe is essentially homogeneous in this respect.
A Wi-Fi signal occupies five channels in the 2.4 GHz band. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap. The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is therefore not accurate; channels 1, 6, and 11 are, however, the only group of three non-overlapping channels in the U.S.
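This channel arithmetic can be checked mechanically. The sketch below treats two channels as overlapping when their numbers differ by less than five, per the rule above, and searches the US-permitted channels 1–11 for groups of three mutually non-overlapping channels:

```python
from itertools import combinations

# Two 2.4 GHz channels overlap unless their numbers differ by 5 or more.
def overlaps(a, b):
    return abs(a - b) < 5

us_channels = range(1, 12)  # channels 1-11 are permitted in the US

# Find every group of three channels with no mutual overlap.
groups = [g for g in combinations(us_channels, 3)
          if all(not overlaps(a, b) for a, b in combinations(g, 2))]
print(groups)  # [(1, 6, 11)]
```

The search confirms the claim: within 1–11, the triple (1, 6, 11) is the only set of three mutually non-overlapping channels, even though many non-overlapping *pairs* (such as 2 and 7) exist.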
Equivalent isotropically radiated power (EIRP) in the EU is limited to 20 dBm (100 mW).
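The dBm figure and the milliwatt figure are the same limit in two notations: dBm is decibels referenced to 1 mW, so P(dBm) = 10 * log10(P(mW)). A quick conversion sketch:

```python
import math

# dBm is power in decibels relative to 1 milliwatt.
def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

print(dbm_to_mw(20))   # 100.0 mW -- the EU EIRP cap
print(mw_to_dbm(100))  # 20.0 dBm
```

Because the scale is logarithmic, every 3 dB is roughly a doubling of power: a 23 dBm transmitter radiates about twice the power of a 20 dBm one.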
The current 'fastest' standard, 802.11n, uses double the radio spectrum/bandwidth (40 MHz) compared to 802.11a or 802.11g (20 MHz). This means there can be only one 802.11n network on the 2.4 GHz band at a given location without interference to/from other WLAN traffic. 802.11n can also be set to use 20 MHz bandwidth only, to prevent interference in dense communities.
