by Azam A. Mirza
In Chapter 17, "The BackOffice I-Net Toolbox," you learned about some of the tools and products Microsoft BackOffice provides for developing an effective Internet presence. This chapter focuses on the tools and techniques that make the Internet work. The commercialization of the Internet has taken the corporate world by storm, and the fear of being left behind has created a frenzy among corporations rushing to adopt the Internet phenomenon.
The following sections describe some of the Internet tools and techniques, present scenarios for achieving business value from the Internet, and also discuss the increasing role of intranets in the corporate world.
In the late 1960s, the United States Advanced Research Projects Agency (ARPA), later renamed the Defense Advanced Research Projects Agency (DARPA), became interested in the concept of a packet-switched network. ARPA consequently sponsored a research project to design and develop an advanced mechanism for facilitating the flow of information between distributed computers. ARPA funded the initial creation of the packet-switched network called the ARPANET, which would eventually grow into what is known today as the Internet.
ARPANET initially used the UNIX operating system, the Network Control Protocol (NCP), and a 50 kilobits per second (Kbps) line to connect four computers located at the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah at Salt Lake City. By the early 1980s, a new protocol for network connectivity and communications, the Transmission Control Protocol/Internet Protocol (TCP/IP), was proposed and adopted for ARPANET use. As a public domain protocol, TCP/IP was widely accepted by the computing community for connecting to the fledgling ARPANET. By the mid 1980s, ARPANET had grown from the humble beginning of a four-computer network to more than 200 linked networks with thousands of computers.
Recognizing the potential of the ARPANET as a major network for research, education, and communications, the National Science Foundation (NSF) developed the NSFNET in the mid 1980s to provide a high-speed backbone for connecting to the Internet. From the mid 1980s to the early 1990s, the NSFNET backbone was consistently upgraded from a 56 Kbps line to a 1.544 Mbps (T1) line to a 45 Mbps (T3) line. The NSFNET played a major role in funding the advancement of the Internet as a viable tool for research and development.
Shortly after the NSFNET backbone was put into place, other government agencies in the United States and organizations abroad got into the act with the creation of backbone networks by the National Aeronautics and Space Administration (NSINET), the Department of Energy (ESNET), and numerous European organizations. Over a short period, the Internet grew to become a conglomeration of more than 5,000 networks in more than 60 countries with more than 5 million computers (see Figure 17.1).
Fig. 17.1
The Internet now provides connectivity to a large part of the world. (Illustration
courtesy of the Internet Society.)
With the eventual decommissioning of the original ARPANET and the NSFNET backbone network as it was outgrown, other Internet backbone network providers emerged. Currently, the backbone for the Internet is supplied by a group of national commercial providers, such as AT&T, MCI, and Sprint, as well as several smaller regional providers. Internationally, the backbone is supported by government organizations and private sector corporations.
The Internet of the 1990s is a substantial departure from the small network created some 20 years ago for research purposes. With the spread of the Internet's popularity into the business community and private sector, the number of computers connected to the Internet has doubled every year since 1988. It is estimated that by the middle of 1995, more than 20 million users were accessing the Internet and connecting to more than 7 million host computers worldwide.
The initial Internet was nothing more than a collection of connected networks that facilitated the flow of information between computer users. Because the Internet was largely based on computers that ran various versions of the early UNIX operating system, it was mainly a text-based, command-line environment. Also, the slow transmission lines connecting Internet users necessitated the use of techniques that required the least amount of bandwidth for transmission of data. Most early tools and applications used cryptic commands and minimal user interfaces to save transmission overhead. However, as the NSFNET backbone was upgraded to higher speeds and the network became capable of handling higher volumes of information flow, the Internet became a more user-friendly and flexible environment.
Efforts got underway to develop methods of accessing the databases available on the Internet. Early efforts led to the development of tools for searching and retrieving information, such as Archie, Veronica, Jughead, and Gopher. Archie was the first of such tools. It uses a simple method to catalog the information and files available on remote machines and makes the list available to users. Subsequent tools became more and more sophisticated in their approaches, leading to Gopher, which required the use of special Gopher servers for collecting, storing, and displaying information for access by Internet users. Gopher is widely used for providing catalogs of books in libraries and phone book listings. Veronica and Jughead each provide additional indexing and searching capabilities for use with Gopher servers. Even though all these search and retrieval tools are very sophisticated, they all use text-based user interfaces.
NOTE: The names Veronica and Jughead were patterned after the popular comic book characters, which are trademarks of Archie Comic Publications, Inc.; the name Archie itself, however, is simply "archive" with the v removed. Veronica and Jughead are both acronyms. Veronica stands for Very Easy Rodent-Oriented Net-wide Index to Computerized Archives, and Jughead stands for Jonzy's Universal Gopher Hierarchy Excavation and Display. Gopher was named after its creators' alma mater: the University of Minnesota Golden Gophers.
With the arrival of the Microsoft Windows graphical user interface in the early 1990s, the continuing popularity of the Apple Macintosh user interface, and the X Windows environment on the UNIX operating system, the graphical user interface became the norm on the desktop rather than the exception. However, the Internet was still largely a text-based environment in a world becoming predominantly graphical. When it became apparent that it was possible to publish information on the Internet for access by the mass population, efforts got underway to develop tools for graphical display of the information.
The key factors responsible for the Internet's exponential growth are the development of the World Wide Web (WWW) (see the section "The World Wide Web" later in this chapter), and a user-friendly way to browse through the information available on it. The development of the Mosaic graphical Internet browsing tool (in 1993) at the University of Illinois National Center for Supercomputing Applications resulted in making the Internet more accessible and much easier to use. Mosaic provided graphical point-and-click navigation of the vast Internet expanse, and enabled people to experience the Internet without having to learn archaic and difficult UNIX utilities and commands.
Naturally this proliferation of users led to creative approaches to sharing information on the Internet, and as the amount of quality information from an expanding variety of sources increased, the Internet phenomenon became known as the Information Superhighway (see the section "I-Net Tools and Protocols" later in this chapter).
Through its first 20 years of existence, the Internet simply facilitated communications between researchers, scientists, and university students. Its primary value was in providing users with the capability to exchange electronic mail (e-mail) messages, participate in discussion groups, exchange ideas, and work with each other. The Internet was strictly a nonprofit domain; users resented and shunned anyone who tried to make a dollar from its use. However, in the last three to four years, the Internet has gone through a tremendous transformation. Cyberspace, as it is sometimes referred to, is a place for communicating, advertising, conducting business, and providing information to the masses or the individual.
The Internet is not a single network, but instead, a combination of thousands of networks spread throughout the world. As mentioned earlier, the initial ARPANET had humble beginnings as a four-node network. As time passed, more and more computers were connected to the initial network. In time, entire networks were connected to the base ARPANET. In the late 1980s, NSFNET came into existence and formed the backbone of a network that connected thousands of Department of Defense agencies, universities, businesses, and research institutions.
Today, the Internet consists of a backbone of networks being maintained by companies, such as AT&T, MCI, and Sprint. This backbone network typically runs at speeds of 45-100 Mbps. It also provides connectivity to the mid-level and regional networks being operated throughout the world.
For example, one of these is the Canadian Network (CA.net), which provides connectivity to most of Canada. These mid-level and regional networks then provide connectivity to local organizations, universities, and Internet Service Providers (ISPs) who provide or sell Internet connectivity to the commercial and private sectors.
NOTE: The multilevel connectivity of the Internet is transparent to the average user and does not in any way affect the capability to explore the Internet.
The Internet networks run on a protocol suite called TCP/IP, a mechanism that breaks a message into small packets and transmits them over the network. In addition to carrying a piece of the actual message, each packet also carries an identification tag to facilitate the reassembling of the pieces into the proper order when the message is received. The packets comprising a message must all reach the proper destination, but they do not have to use the same route or travel through the Internet in a predetermined sequence. When all the packets containing a particular message have arrived at the destination, they are automatically put back together to re-create the original message. This is why the Internet is called a packet-switched network. The devices that collect these packets and determine the best transmission routes for them are called routers. See Chapter 9, "Using TCP/IP with Windows NT Server," for more information.
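The split-tag-and-reassemble idea can be sketched in a few lines of Python. This is only an illustration of the concept: the 4-character packet size and the shuffle standing in for out-of-order delivery are assumptions, and real IP packets carry far more header information than a bare sequence number.

```python
import random

def send(message, packet_size=4):
    # Break the message into packets, each tagged with a sequence number.
    packets = [(seq, message[start:start + packet_size])
               for seq, start in enumerate(range(0, len(message), packet_size))]
    random.shuffle(packets)  # packets may take different routes and arrive out of order
    return packets

def receive(packets):
    # Reassemble the original message by sorting on the identification tags.
    return "".join(chunk for _, chunk in sorted(packets))

print(receive(send("Hello, ARPANET!")))  # Hello, ARPANET!
```

No matter what order the packets arrive in, sorting on the tag restores the original message, which is exactly the property that lets packets travel independently.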
Domains provide the Internet with a way to define network groups, computer names, and addresses. Every Internet computer is identified by a unique number called the IP (Internet Protocol) address; even though millions of computers are connected to the Internet, each IP address must be unique. An IP address consists of four numbers, called octets, separated by dots. The permissible range for an octet is 0 through 255 (for example, 200.215.180.210).
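The four-octet rule is easy to check mechanically. The following sketch validates only the dotted-quad format described above; it deliberately ignores the reserved ranges discussed in the notes that follow.

```python
def is_valid_ip(address):
    """Check that an address is four dot-separated octets, each 0 through 255."""
    parts = address.split(".")
    if len(parts) != 4:
        return False
    return all(part.isdigit() and 0 <= int(part) <= 255 for part in parts)

print(is_valid_ip("200.215.180.210"))  # True
print(is_valid_ip("200.215.180.999"))  # False: 999 is outside the octet range
```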
NOTE: Not all addresses are available for general use. Certain ranges in the Class A address space have been reserved for administrative purposes and future usage, thus reducing the total number of available addresses significantly.
NOTE: The enormous growth of the Internet has resulted in a shortage of available IP addresses. An inappropriate allocation of addresses in the Class B range has resulted in inefficient address usage. Many organizations have been assigned relatively large Class B addresses when a smaller Class C address range would have sufficed. Efforts are underway to rework the addressing scheme to alleviate the problems.
See "A Brief TCP/IP Tutorial," [Ch. 9]
Dividing the IP address space into classes makes it easier to distribute addresses in a top-down hierarchy. Large networks and providers are assigned Class A or Class B address blocks, with smaller organizations assigned Class C blocks within those ranges. For example, Macmillan Computer Publishing USA is assigned the Class C network 199.177.202.X, which gives Macmillan Computer Publishing 254 usable host addresses to assign as it sees fit within its domain. Macmillan's network provider holds the larger surrounding block, 199.177.X.X, enabling it to assign more than 65,000 addresses to its customers. The IP addressing scheme purposely distributes administration of address assignment to lower levels of the hierarchy for autonomy of operations.
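The class of an address can be read directly from its first octet. This sketch applies the standard classful ranges; note that Classes D and E are reserved (for multicast and experimental use) rather than assigned to organizations.

```python
def address_class(address):
    """Classify an IPv4 address by its first octet under the classful scheme."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"  # reserved for multicast
    return "E"      # reserved for experimental use

print(address_class("199.177.202.10"))  # C
```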
NOTE: The InterNIC (Internet Network Information Center) is now ultimately responsible for processing all IP address requests and has the final authority for assigning IP addresses to interested parties.
Because it is difficult to remember cryptic numbers, the Internet uses a naming convention called the Domain Name System (DNS), which maps names that are easier to remember onto IP addresses. For example, the address of the Macmillan Computer Publishing USA Internet server is (currently) 199.177.202.10. Because such a number is difficult to remember, the server has been assigned the name www.mcp.com. Just like the numeric IP addresses, domain names are separated by dots for the purpose of creating a name-based domain hierarchy. Domain names are read from right to left, starting with the top-level domain. With www.mcp.com, the top-level domain is the com domain, mcp is a mid-level domain that is part of the com domain, and www is one of the names of a computer in the mcp domain (it may have other names as well).
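Reading a name from right to left is simple to demonstrate: split on the dots and reverse, so the top-level domain comes first.

```python
def domain_hierarchy(name):
    """List a domain name's labels from the top-level domain downward."""
    return list(reversed(name.split(".")))

print(domain_hierarchy("www.mcp.com"))  # ['com', 'mcp', 'www']
```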
Six top-level domains are defined for the United States:

com  Commercial organizations
edu  Educational institutions
gov  Government agencies
mil  Military organizations
net  Network providers
org  Nonprofit organizations
In addition, countries around the world have each been assigned a two-character top level domain name. For example, uk is for the United Kingdom, ca is for Canada, au is for Australia, and fi is for Finland. The DNS naming convention allows for 2-4 levels of nested domains, a completely arbitrary selection based on ease of use and simplicity.
NOTE: Recently, the United States has also been assigned a two-character code, namely us, to identify computers and domains within the United States. However, the previously mentioned six top-level domains are still predominantly used to identify entities within the United States, a concession to the country of origin of the Internet.
Dividing computers and networks into domains distributes the administration of the naming system to lower levels of the hierarchy. Because every computer name on the Internet must be unique, it is easier to handle the administration by placing the responsibility on the network administrators to maintain uniqueness within their own domain. For example, the company G.A. Sullivan can have two computers named "server" and be legal as long as one is in the gasullivan.com domain (server.gasullivan.com), and the other is in the hamilton.gasullivan.com domain (server.hamilton.gasullivan.com). You cannot have both computers in the gasullivan.com domain because their name strings would be identical (server.gasullivan.com). However, because hamilton is a subdomain of the gasullivan.com domain, it is possible to have a second computer named "server" within the hamilton.gasullivan.com subdomain. Therefore, as long as you can append the computer name to a domain name to make the entire name string unique, you have satisfied the naming convention.
The Internet has become a great influence in the business world. Its impact was expected by some, but has taken the majority of people somewhat by surprise. Business entities of all types are still struggling to find ways to take advantage of this significant and relatively new resource. The enormous popularity of the Internet has affected the business community in the following ways:
These concepts are supported by the applications being implemented that utilize the Internetóand the WWW in particularófor the following business activities:
The Internet has been recognized by organizations around the world as a new and exciting opportunity for expanding their businesses to reach new and untapped customers. The WWW has leveled the playing field for all organizations big and small by making the medium of communication the same for all. Companies must compete with each other using the same methods and tools and on the strength of their products rather than a glamorous or convincing marketing campaign. This apparently level playing field makes it difficult for organizations to differentiate themselves from each other for attracting customers. However, it also pushes the envelope for coming up with new and exciting ideas for grabbing the attention of the audience.
The most common method of marketing a business on the Internet is to develop a WWW site that introduces its products, in the hope of enticing customers to buy them. Organizations use unique and innovative ideas to attract potential customers to their WWW sites. Figure 17.2 shows the Web site maintained by ProSoft for providing WWW users with information about its Internet and intranet training courses.
Fig. 17.2
ProSoft maintains a WWW site for introducing its training courses to potential
Internet-based customers.
Another common method of advertising on the WWW is to buy advertising space on WWW sites commonly visited by Internet users. Organizations providing services to Internet users, such as the Yahoo WWW search database site, sell advertising space on their WWW sites to finance their operations. Businesses can buy advertising space on other sites that grab users' attention and introduce them to products being offered by organizations around the world. These advertising spots usually also include hyperlinks to the WWW site maintained by the advertiser, so users who are interested in checking out the products can immediately navigate to that site. Figure 17.3 depicts the Yahoo search database Web site with some advertisements by other businesses.
Fig. 17.3
The Yahoo WWW search database sells advertising spots on its site.
The Internet has long been used for buying and selling goods through the electronic medium. In the early days, however, most electronic commerce over the Internet took place between individuals, because the Acceptable Use Policy (AUP) for the NSFNET backbone expressly prohibited the use of the Internet for commercial activities and, therefore, barred organizations from selling products over the Internet.
NOTE: The Acceptable Use Policy was defined as part of the NSFNET charter and governed all conduct over the Internet. It is a set of guidelines that is still adhered to in some form or another by all Internet users. It defines acceptable and unacceptable behavior for "citizens" of the Internet.
NOTE: Before commercial activity was acceptable on the Internet, the only means of buying or selling products over the Internet was through special UseNet newsgroups. These newsgroups were set up to enable users to trade with each other, and are still heavily used. For example, a newsgroup called misc.forsale.computers.monitors is for buying and selling computer monitors. The newsgroup rec.photo.marketplace is used exclusively to buy and sell photography equipment.
However, after the NSFNET backbone was decommissioned and the Internet became more commercial, changes were brought about to allow commercial activity on the Internet. The World Wide Web became the medium of choice for carrying out electronic commerce. One of the most famous and popular commercial WWW sites is the Internet Shopping Network. The ISN, as it is more commonly called, was one of the first WWW sites developed exclusively to sell products on the Internet. Figure 17.4 displays the home page of the Internet Shopping Network Web site.
Fig. 17.4
The Internet Shopping Network sells a wide variety of computer products to
WWW users.
Over the last couple of years, hundreds of WWW sites have sprung up for selling products to the Internet community. They are commonly referred to as online shopping malls. The Internet can be used to buy products in any imaginable category from clothes to skiing gear, computers, and boats.
Traditional businesses that have moved fastest to embrace the Internet have usually been apparel retailers and mail-order catalog companies. Such companies as Lands' End, The Nature Company, The Limited, and the Damark mail-order catalog have quickly adapted their businesses to the Internet.
The introduction of server products, such as the Microsoft Merchant Server, indicates the importance of electronic commerce on the Internet. These server products make it possible for corporations to engage in full retail sales operations. They support such operations as inventory management, order processing, account management, secure credit card processing, and order entry.
One of the major advantages of the Internet has been its capability to provide a means of communication among organizations. E-mail and mailing lists provide businesses with a convenient and relatively inexpensive mechanism for communicating with customers and vendors on a one-to-one basis. More and more, business users are using Internet e-mail to keep in touch with each other, exchange ideas and information, and receive customer feedback.
Mailing lists are maintained by organizations around the world to inform their customers of new happenings, product announcements, and other important events. One example of a mailing list is the Microsoft Windows NT Server mailing list available to registered users of Windows NT Server. It informs them of product updates, bug fixes, and upcoming events relating to the Windows NT Server product. For more information, see the sections "Electronic Mail" and "UseNet: Network Newsgroups" later in this chapter.
Many organizations have recognized the WWW as a great place to advertise, and they are using the WWW as a means of publishing information about their products and services. The popularity of the WWW as an advertising and marketing tool is evident in the number of sites set up solely for the purpose of advertising products and services. The Internet also presents an enormous opportunity for service providers to develop a presence on the WWW. Virtually overnight, hundreds of businesses specializing in WWW server setup, Web page creation, electronic publishing, and content creation have sprung up around the world. The home page for one such company is shown in Figure 17.5.
Fig. 17.5
WWW content creators, such as Digital Dimensions, Inc., also have their own
Web pages.
The World Wide Web is also being used as a medium for electronic publishing. Following are examples of popular uses of the Web:
Fig. 17.6
The ESPNet SportsZone provides up-to-the-minute sports news, scores, and highlights.
The Internet has also become a vehicle for providing help desk operations to the user community. Corporations worldwide are using the Internet as a means for providing product support and service. For example, General Life Insurance Company uses the Internet to provide its life insurance policy holders with information about their policy status. Figure 17.7 presents a view of the main page at the General Life WWW site.
Fig. 17.7
General Life Insurance Company's WWW site provides an automated mechanism
for clients to look up life insurance policy information.
Such companies as Microsoft and IBM have placed their entire help desk knowledge base on the Internet for their customers to use. Product support is fast becoming a very popular and well-received application for the Internet.
To join the Internet community and provide other Internet users access to your Internet site, you must have an Internet connection. Internet connections are provided by commercial connection providers called Internet service providers (ISPs). ISPs sell Internet connections based on a variety of pricing and connectivity schemes. When acquiring an Internet connection, consider the following points:
Each of these topics is discussed in the following sections.
Thousands of ISPs around the world provide Internet connectivity. ISPs exist in all flavors and sizes, from such companies as Sprint, MCI, and AT&T to small local and regional organizations. Figure 17.8 illustrates the role an ISP plays in providing your Internet connectivity.
Fig. 17.8
ISPs provide connectivity by selling commercial Internet connections.
Pricing for Internet connections is based on the kind of connection you desire and its speed. Prices vary widely from one ISP to another. The larger national providers will probably cost more than the smaller, local ISPs. The larger ISPs claim reliability, customer service, and quality as their selling points. Local providers tout the personal attention and easy accessibility of their services. The field is so crowded and so many options are available that the Internet connection market has become a battleground for price wars. ISPs are constantly touting their lowered prices and increased bandwidth.
NOTE: Select an ISP that provides a good, reliable connection. An unreliable connection is the single biggest complaint against most ISPs.
The following are some issues you should consider when selecting an ISP:
Internet connections come in a variety of speed choices. For running information publishing sites, anything less than a 56 Kbps connection is not enough. Your choice of a connection speed depends on how much traffic you expect. Start with a suitable speed and upgrade if you experience speed problems as more people find out about your site. Bandwidth options are available in a variety of speeds, as shown in Table 17.1.
CAUTION: I do not recommend modem connections for running a Web server site. The typical modem speeds of 14.4 Kbps to 28.8 Kbps are not fast enough to handle the traffic created by Web servers.
Table 17.1  Bandwidth Options for Internet Connections

Connection Type | Bandwidth (Kbps)
Leased line | 56
Frame relay | 56
ISDN | 128
Fractional T1 | 56-1,544
T1 | 1,544
T3 | 45,000
Leased line, frame relay, and ISDN connections can handle light traffic of up to about 25-50 simultaneous connections. Fractional T1 and T1 lines can handle anywhere from 100 to 1,000 simultaneous users. Organizations that handle thousands of users at a time have multiple T1 connections or even a T3 connection.
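A rough capacity estimate can guide the choice among these options. The sketch below assumes a purely illustrative figure of 2 Kbps of sustained bandwidth per simultaneous user; your actual per-user load depends on the content you serve.

```python
# Connection options from slowest to fastest, with bandwidth in Kbps.
OPTIONS = [("Leased line/Frame relay", 56), ("ISDN", 128),
           ("T1", 1544), ("T3", 45000)]

def smallest_sufficient_line(users, kbps_per_user=2):
    """Pick the slowest connection that covers the estimated peak demand.

    The 2 Kbps-per-user default is an illustrative assumption, not a benchmark.
    """
    needed = users * kbps_per_user
    for name, capacity in OPTIONS:
        if capacity >= needed:
            return name
    return "Multiple T3 lines"

print(smallest_sufficient_line(25))   # Leased line/Frame relay
print(smallest_sufficient_line(500))  # T1
```

This mirrors the advice in the tip that follows: start with the smallest line that fits, and upgrade as traffic grows.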
TIP: Start with a connection speed that is fast enough to get you up and running, and test your Internet site setup. If in the future you need to upgrade, you can always do so.
This section discusses some other connectivity issues that you must tackle before your Internet connectivity is complete. These issues are as follows:
NOTE: You can contact InterNIC via e-mail at [email protected] or by phone. In the USA, call 1-800-444-4345. In Canada or elsewhere, call 1-619-455-4600. From overseas, you may need to use a country code to access the USA when dialing.
TIP: Domain name selection is an important step. Make sure that you select a name appropriate for your organization because you cannot change your domain name after it is registered. For example, Macmillan Computer Publishing USA has a registered domain name of mcp.com.
TIP: Windows NT Server can be set up to provide software routing of your TCP/IP packets. You can accomplish this task by using static routing tables and the ROUTE.EXE application included with Windows NT Server. Use of this utility is beyond the scope of this book. Consult Volume 2 of the Windows NT Resource Kit, Windows NT Networking Guide, for more information.
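The heart of any routing table, static or dynamic, is a longest-prefix match: the most specific route that contains the destination wins. The following Python sketch (the networks and next-hop addresses are hypothetical) illustrates the idea behind the static tables a utility such as ROUTE.EXE manipulates.

```python
import ipaddress

# A hypothetical static routing table: (destination network, next-hop address).
ROUTES = [
    (ipaddress.ip_network("199.177.202.0/24"), "199.177.202.1"),
    (ipaddress.ip_network("199.177.0.0/16"), "199.177.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),  # default route
]

def next_hop(destination):
    """Return the next hop via longest-prefix match, as a router would."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The most specific (longest prefix) matching route wins.
    return max(matches, key=lambda pair: pair[0].prefixlen)[1]

print(next_hop("199.177.202.10"))  # 199.177.202.1
print(next_hop("10.0.0.5"))        # 192.168.1.254 (falls through to the default)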
NOTE: Microsoft Exchange Server includes an SMTP gateway called the Internet Mail Connector as part of the server product.
If you are implementing an Internet solution through which users in your enterprise will access the Internet or use your intranet IIS machines, you must spend time and resources educating your users about the Internet, its policies, and how best to utilize it.
You should develop a plan for user training and education. You need to address such user-related issues as the following:
NOTE: Your users will be representing your organization to the rest of the Internet world. Make sure that they represent your organization in a positive and acceptable manner. Set up organizational guidelines that define acceptable behavior.
User training is a time-consuming and costly undertaking. However, it is very important that users are properly educated and represent your organization in a positive manner. The time you invest in training will be repaid many times over.
One of the most important planning-stage steps is handling the security issues involved with operating Internet sites. When you set up your Internet site, pay special attention to security issues so that you do not provide access to sensitive company information to people from outside your enterprise.
Microsoft BackOffice products provide a flexible security model for making sure that your machines and network are safeguarded against unauthorized intrusion.
In addition to the security measures offered by Microsoft BackOffice products, you can take extra steps to ensure that your internal network and information are protected from potential security risks by doing the following:
NOTE: Firewalls, screening routers, and other similar security measures can limit who has access to your network. For example, you can filter user traffic based on IP addresses or security keys. If the remote client trying to connect to your machine is not on the list of allowed clients, it will not be granted access to your network.
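The address-filtering idea in the note above reduces to a membership test. This sketch (the addresses are hypothetical) shows only the core of a screening rule; production firewalls also match on whole networks, ports, protocols, and traffic direction.

```python
# Hypothetical list of client addresses allowed through the screening router.
ALLOWED_CLIENTS = {"199.177.202.10", "199.177.202.11"}

def admit(client_ip):
    """Screening-router style check: admit only listed client addresses."""
    return client_ip in ALLOWED_CLIENTS

print(admit("199.177.202.10"))  # True: on the list, granted access
print(admit("10.0.0.1"))        # False: not on the list, refused
```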
See "Security for Your Internet Server," [Ch 19]
Security has been the number one concern for organizations trying to implement I-net solutions. Because the Internet evolved from an open and distributed environment, security concerns have plagued the network from the very beginning. The corporate community's slower acceptance of the Internet, compared with the consumer market, can be directly attributed to concerns and issues raised over the security of information transmissions.
The sensitive nature of corporate data makes security of paramount importance in the corporate community. Standards committees have worked very hard to introduce standards for secure communications over the last year or so. The following are some of the tools and technologies developed to implement Internet security:
Encryption
Most Web server and browser software products support the Secure Sockets Layer (SSL) security mechanism for data encryption. SSL encrypts the data being transferred over the network and requires the client to present a valid key before the data can be decrypted. SSL is the most widely implemented security standard for data protection. In addition to encryption, SSL also provides user authentication functionality.
To use encryption technologies, such as SSL, an electronic "certificate" must be obtained from a certifying organization, such as VeriSign. Once installed, the certificates enable Web servers to send encrypted data and request encrypted responses from client browsers.
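In a modern Python environment, the standard ssl module (built on TLS, the descendant of the SSL mechanism described here) shows the certificate-verifying behavior a browser relies on. The commented-out connection and the host name are illustrative only.

```python
import socket
import ssl

# A default client context verifies server certificates against trusted roots.
context = ssl.create_default_context()
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

# To use it, a client would wrap an ordinary socket before talking to a server:
# with socket.create_connection(("www.example.com", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
#         ...  # all data on this socket is now encrypted
```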
In addition, the Secure Electronic Transactions (SET) standard provides a secure method of enabling electronic commerce using WWW technology. SET was mainly designed to handle the exchange of credit card numbers over the Internet.
Firewalls
Firewalls are special software systems that safeguard a network from outside intrusion and from unauthorized access to the outside world from within an organization's network. A firewall is basically a gatekeeper that checks whether network traffic coming into or going out of a secured environment originates from an authorized user and has the appropriate permissions to access the destination resource.
NOTE: An infamous network security scanner called SATAN created a considerable amount of publicity due to the dual nature of the product. Although designed to help administrators audit their own systems, SATAN could also be used by intruders to find loopholes in an organization's network infrastructure.
TIP: Proxy servers, such as Microsoft Proxy Server, are products that perform firewall functions to keep intruders from coming in or going out of a secure network. See Chapter 23, "Implementing Microsoft Proxy Server," for more information.
Electronic Signatures and IDs
Electronic signatures are a recent development that takes the concept of security to the lowest and most personal level by assigning electronic identification to individual users of the Internet or intranet. In systems enabled for the use of electronic signatures, every operation performed by the user requires a valid electronic signature. The signature is then authenticated against a central database set up by the signature-issuing organization to determine whether the signature is valid.
Electronic signatures are a valuable addition to environments in which very sensitive information is being accessed, such as credit card processing systems. Users can use their credit cards to purchase products and then verify their purchase for approval by providing a matching electronic signature. They can also be used in e-mail systems to ensure that a message is indeed from the person who signed the message, and even that the contents of the message have not been altered. As a greater variety of human interaction is performed in networked environments, the capability to positively identify an individual becomes vital.
As the world races to connect to the Internet, an unexpected phenomenon is taking place. Corporations trying to figure out profitable and beneficial uses for this emerging global network have found a way to enhance the power and usefulness of their own internal networks as well. The intranet model, a younger and more contained sibling of the Internet, has emerged. An intranet is a scaled-down version of the Internet; it is scaled down not in functionality or features, but in size and scope.
Intranets are internal, corporate-wide networks based on technology used for the Internet. Whereas the Internet provides corporate networks with connectivity to the global network of networks, intranets enable corporations to build internal, self-contained networks that have important advantages over existing network technologies. The intranet process model and its network connectivity architecture hold several major advantages over traditional networking architectures.
Not only is an intranet an excellent solution on its own, it can also be a great stepping stone to enterprise connectivity to the global Internet. Corporations hesitant about embracing the Internet can move at a slower pace and implement an internal intranet before making the leap to the global network. Since the technologies and tools used are identical, the move to Internet connectivity is only a matter of scaling an intranet to the bigger Internet. Figure 17.9 shows the corporate intranet Web site for G.A. Sullivan.
Fig. 17.9
G.A. Sullivan's internal Web site is used to disseminate corporate information
to employees.
Leveraging the technologies used throughout the global Internet framework and scaling them down to the enterprise level provides intranets with immense advantages. The Internet networking concept has served millions of interconnected networks very well over a long period of time. Utilizing the same tools and techniques to enhance the networking capabilities of an enterprise is a logical solution. Some of the advantages of an intranet are as follows:
Cost Effective Networking
Companies are realizing the cost effectiveness of the intranet network architecture. It is relatively inexpensive to set up an intranet. The technologies at the core of an intranet are open, standards-based, and widely implemented. The server software for setting up a World Wide Web (WWW) server, often referred to as a Web server, is usually inexpensive and is available from a multitude of vendors, such as Microsoft and Netscape. The client workstations on an intranet use inexpensive and easy-to-use Web browser software for connecting to the Web server. However, setup, installation, and operation of an effective corporate intranet is not a trivial task and requires a lot of time and effort on the part of the developers.
The networking infrastructure needed for setting up a basic intranet is already in place in most corporations. Note, however, that a rapidly growing intranet may eventually place extensive demands on an existing infrastructure in terms of hardware and networking resources.
A New Process Model
Intranets implement a hybrid architecture that brings together the best features of the client/server process model with those of the host-based process model. The intranet architecture is geared towards ease of deployment, centralized control of information, and simple administration of resources. In the intranet process model, the server is responsible for providing information and requested data to the intended users. In addition, the server holds the key to the graphical user interface presented to the user through the client browser software.
Client workstations (typically desktop PCs) use Web browser software to display information sent by the server. The server controls the layout and content of the information. This makes management and administration of information very reliable because it is centralized. However, the client is not just a dumb terminal. It does perform operations, such as information caching and local storage of information downloaded by the user. In other words, the intranet architecture is a process model that takes the best from the client/server world and combines it with the best attributes of the traditional host-based architecture employed by mainframes and minicomputers.
Platform Independence
An important advantage of intranets is their capability to bring together heterogeneous systems into a common inter-operable network. The corporate world has spent millions of dollars and many years trying to connect disparate and incompatible systems into a seamless and inter-operable network. The results have not been completely satisfactory. As the intranet architecture was developed from the ground up to be able to connect different systems together, it lends itself well to the corporate culture in most organizations where different systems, such as PCs, Macs, and UNIX-based workstations, must coexist.
Intranets present enterprises with a unique and exciting opportunity to bring together technologies that were difficult to implement before. As mentioned previously, the cost effectiveness and platform independence of the intranet model bring true heterogeneous networking to the enterprise for the first time. As more and more enterprises understand the power of the Internet networking model, they are racing to embrace the technology. Corporations around the world are implementing intranet networks within their enterprise to avail themselves of these advantages.
It is important to note that intranets can be stand-alone networks or can be connected to the Internet at the same time. This provides your enterprise users with connectivity to the internal network and outside world using the same networking technologies and tools. Users do not have to adopt different methods and processes when moving between an intranet and the outside world. For example, currently most enterprises implement multiple e-mail systems to provide connectivity to the internal network and to the outside world. For each outside entity a corporation must connect to, it must adopt a system that is compatible with the other side. With Internet technology, all communications can occur over a single e-mail system.
Intranets provide added value to an enterprise in the following areas:
The following communication tools facilitate information exchange among users of a corporate intranet:
Most popular among communication tools are e-mail messaging systems and discussion group reading and posting programs. In addition, tools are becoming available for using the multimedia hardware on client workstations (speakers and microphone) to provide digital phone facilities over the Internet. Users can speak to each other using their computers rather than a traditional telephone. There are also tools for providing live feeds of television and radio broadcasts.
Live video and audio products provide support for streaming media. It is possible to transfer feeds from satellite transmissions directly to intranet networks and make them available to users. Users can watch live seminars from their desktops using their browsers, watch news broadcasts, or gain access to broadcasts of educational classes, for example. Such browsers as Navigator and Internet Explorer have add-on products available that make it possible to see and hear live video and audio broadcasts. Various vendors provide tools for this purpose; the ShockWave add-on product is among the best known.
These tools can be effectively used within a corporate environment to provide users with intranet-based digital phone capabilities, the ability to bring corporate seminars and presentations to user desktops, and the capability to provide videoconferencing facilities for users in different geographic locations.
Intranets provide a means for corporations to use electronic publishing for disseminating information to their users. A Web site can be established that is used as the clearing house for information that needs to be transmitted to users. Users can connect to the Web site using their Web browsers and navigate through the Web pages to reach the information they are looking for. The common electronically published material on intranets includes the following:
Because users most often access this material through a Web browser, the browser provides a single point of entry for a wide variety of tasks.
Workgroup applications provide users with productivity tools for working together more effectively. The most popular workgroup application of the past decade has been the Lotus Notes system. In a major endorsement of intranets and intranet technologies, IBM has recently announced plans to migrate the Notes environment to the World Wide Web platform. Other vendors such as Netscape and Microsoft have also launched initiatives to address the growing need for intranet productivity tools.
Netscape's upcoming Communicator product is a combination of browser, e-mail, groupware, discussion, and collaboration software for facilitating group activities and enhancing team productivity.
Microsoft is building team collaboration and group connectivity features into its server and client product lines to facilitate better information retrieval and sharing among groups of users.
In a typical intranet, client workstations utilize a networked environment to connect to servers that facilitate requests for information and data. At its most basic level, an intranet requires a single Web server, a network infrastructure based on the TCP/IP networking protocol, and client workstations located within the same physical location. At the other end of the spectrum, an intranet can consist of hundreds of Web servers and thousands of client workstations located all around the world, connected through a complex array of networking components.
Whatever the scope, careful planning must go into setting up the infrastructure for a corporate intranet. In most large organizations, several servers will be needed to host the various components of an intranet. However, in smaller organizations, even a single server can be used to set up a basic intranet. Figure 17.10 presents a sample design for intranets that can satisfy the needs of many organizations.
Fig. 17.10
An intranet can comprise a collection of servers providing services to a heterogeneous
and dispersed client base.
Note that separate machines for various server functions are shown for clarity and ease of understanding. The different server software packages shown in Figure 17.10 (Web server, database server, and so on) can be run on a single server. The choice of the number of Web servers and the selection of separate machines to perform the various operations will depend on the following:
It is always a good idea to start small with a basic intranet setup and build upon that foundation. Intranet server software packages are available for all popular server platforms, such as Windows NT, OS/2, Macintosh, NetWare, and various flavors of the UNIX operating system.
So far, UNIX has been the overwhelming server operating system of choice for setting up Internet and intranet Web servers, because the Internet was originally built on UNIX-based machines. The perception is that UNIX is better suited to the high-performance requirements of the Internet.
However, Windows NT has taken major strides in the last year as a viable Web server platform alternative, and is gaining popularity among corporations deciding to implement I-net technology. Ultimately, the selection of a server operating system to use depends more on corporate guidelines and the availability of in-house technical expertise than on what other organizations might be using. Each of the previously mentioned server operating systems is a worthy candidate for potential Web server implementation. It is unnecessary to set up a UNIX-based Web server in an organization where another server operating system is the standard.
Client workstations can be based on any operating system that has Web browser software available. Typical client workstations are a combination of PC, Macintosh, and UNIX systems. Finding a Web browser software package is usually not a problem for these operating systems because such companies as Netscape and Microsoft provide browser support for a wide range of client platforms.
If the corporate intranet is going to expand beyond the local area network (LAN) and encompass an organization's wide area network (WAN), then consideration must also be given towards maximizing throughput across slower WAN links.
Several strategies are available to handle an intranet spanning a WAN. Some corporations may opt to have individual divisions, groups, or subsidiaries install, operate, and manage their own intranets, creating a web of smaller intranets. Under such circumstances, cooperation and coordination between the various intranet administrators becomes an important consideration. A centrally created and managed intranet to service the entire organization is another possibility. This simplifies management but, because the IS support staff is centrally located, may increase user training needs.
It should become obvious from these considerations that installing, operating, and maintaining a corporate intranet is not a trivial task. Careful planning and design must go into implementing a sound intranet solution. However, with proper planning, an intranet can deliver measurable ease of use, inter-operability of dissimilar platforms, centralized administration, and levels of productivity, performance, and functionality as yet unattained by other networking technologies.
The WWW and the graphical Web browsers are probably the single most important reason for the widespread popularity of the Internet in the last few years. Web browsers, such as Netscape Navigator and Microsoft Internet Explorer, provide a graphical user interface that makes the information available on the Internet easier to find and more fun to investigate. By enabling users to gain access to the Internet in a point-and-click manner, a whole new class of users was introduced to the online world of computing. The Internet was quickly transformed from the network of networks to the Information Superhighway.
However, the Information Superhighway consists of much more than the pretty sites that encompass the World Wide Web. It is truly a global information store for the following:
These capabilities are available through the many services on the Internet, such as the following:
Each of these services is described in the following sections.
The World Wide Web consists of computers connected to the Internet throughout the world that provide graphical access to information stored on those computers. A WWW server may be set up for the purpose of information publishing, education, or to enable electronic commerce. The characteristic that makes the WWW unique is its capability to provide multimedia features, such as pictures, bitmaps, animation, video, and sound. Figure 17.11 shows a corporate informational WWW site set up by G.A. Sullivan.
NOTE: The term WWW site refers to a computer running WWW server software enabling Internet users to connect to it using WWW browsing software.
Fig. 17.11
Organizations such as G.A. Sullivan use the WWW as a means of providing corporate
information and news to their clients and prospective customers.
The following languages and interfaces are used by Web servers and browsers to facilitate communications between them:
Each of these is described in the following sections.
Hypertext Transfer Protocol
Hypertext Transfer Protocol (HTTP) defines a uniform method for connecting to Web servers using hypertext links. A user can click a link embedded within a Web document, and the system uses the appropriate protocol to connect to the system servicing that link. HTTP also defines mechanisms for retrieving documents from Web servers.
Such Web browsers as Internet Explorer and Navigator use the HTTP protocol to connect to Web sites. A URL address (defined in the section "Uniform Resource Locator" later in this chapter) is actually an HTTP addressing mechanism that provides the necessary information for making the HTTP link.
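Behind the point-and-click interface, the browser sends the server a short plain-text request. The following sketch in Python builds such a request; the host name is hypothetical, and the commented-out lines show how the request could be delivered over a TCP connection to the standard HTTP port.

```python
def build_get_request(host, path="/"):
    """Construct the plain-text HTTP/1.0 GET request a browser sends
    after it has parsed a URL such as http://host/path."""
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            "\r\n")          # a blank line ends the request headers

# The request could then be sent to the Web server on port 80:
# import socket
# with socket.create_connection(("www.example.com", 80)) as s:
#     s.sendall(build_get_request("www.example.com", "/index.html").encode("ascii"))
#     reply = s.recv(4096)   # status line, headers, then the HTML document
```

The server's reply follows the same plain-text convention: a status line, response headers, a blank line, and then the HTML document itself.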
Hypertext Markup Language
The Hypertext Markup Language (HTML) is the markup language used to define WWW server information content. HTML is a plain-text ASCII language that uses embedded control codes, like the word processors of old, to achieve the formatting of text as well as graphics, images, audio, and video. The information is then stored as files on a WWW server. When a Web browser accesses the file, it is first interpreted by the browser, the control codes are decoded, and the formatted information is presented to the user in a graphical format referred to as a Web page.
The WWW and HTML were both developed at CERN, the European Laboratory for Particle Physics, in 1990. HTML 1.0 was the version used by initial Web browsers, such as Mosaic. The current standard being used is HTML 3, which incorporates tables, figures, and other advanced features into WWW document creation. Figure 17.12 presents an HTML document for a Web site home page, and Figure 17.13 shows the page as it looks when viewed using Internet Explorer.
NOTE: CERN is a high-energy physics research center in Switzerland. Much cutting-edge computer science research is conducted at CERN.
NOTE: Mosaic is the graphical Web browser developed by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
Fig. 17.12
HTML source documents are used to create WWW home pages.
Fig. 17.13
Microsoft Internet Explorer is the browser software that allows users to view
information on the WWW.
The HT in HTML stands for Hypertext, an important concept in WWW browsing. Hypertext, or hyperlink, refers to links defined within normal textual documents that enable a user to jump to another part of a document. The Windows help system is an example of a document-based system that uses hypertext links. By clicking highlighted or underlined words, users can navigate easily throughout the help system, even between different help files.
The WWW takes the same concept to the next level by enabling hypertext links between Web pages and even WWW sites. By clicking hypertext links defined on a Web page, users can not only navigate within the same WWW site and view different pages, but can even jump to links pointing to sites on other WWW servers in remote locations. This powerful feature enables navigation of the Internet in a manner never possible before the advent of the WWW.
The HTML standard is platform-independent because it does not incorporate any codes that specify platform-unique parameters. For example, the codes might specify what size font to use, but not the type of font. That is left up to the browser to determine based upon the platform on which it is running and the fonts available on that machine.
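In the HTML source, a hypertext link is just another embedded control code (an anchor tag with an href attribute). To make that concrete, the following sketch uses Python's standard HTML parser to pull the link targets out of a page; the page fragment shown is hypothetical.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the targets of hypertext links (<a href="..."> tags)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A hypothetical fragment of a Web page:
page = '<p>See the <a href="http://www.example.com/">example site</a></p>'
collector = LinkCollector()
collector.feed(page)
# collector.links now holds the link targets found in the page
```

A Web browser performs essentially this extraction on every page it displays, turning each collected target into a clickable region.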
Uniform Resource Locator
The WWW uses a standard called the Uniform Resource Locator (URL) for identifying services and machines available across the Internet. The URL is used for identifying the kind of service being used to access a resource, such as FTP, WWW, Gopher, and so on.
An URL uniquely identifies a machine, service, or product over the Internet. An URL has three parts:
For example, the URL for accessing the Microsoft FTP server for downloading a file called README.TXT is ftp://ftp.microsoft.com/readme.txt. This means that the service being used is FTP (the scheme), the server address is ftp.microsoft.com (the address), and the file to download is readme.txt (the path). To further explain the URL format, every URL scheme is followed by a colon (:), which is followed by two slashes to indicate that an address follows.
Simply put, URLs are a way for identifying resources on the Internet in a consistent manner.
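The three parts described above can be pulled apart mechanically. As an illustration, the following sketch uses Python's standard library to split the example URL from the text into its scheme, address, and path.

```python
from urllib.parse import urlsplit

# Split the example URL from the text into its three parts.
parts = urlsplit("ftp://ftp.microsoft.com/readme.txt")

scheme  = parts.scheme   # the service being used ("ftp")
address = parts.netloc   # the server address ("ftp.microsoft.com")
path    = parts.path     # the file to download ("/readme.txt")
```

Any software that handles URLs, from a Web browser to a server log analyzer, performs this same decomposition before acting on the address.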
Virtual Reality Modeling Language
The Virtual Reality Modeling Language (VRML) is a scripting language for displaying 3-D objects on the WWW. The VRML addition to the WWW enables the display of interactive 3-D worlds (for example, a virtual computer-generated model of a university campus) that can be traversed by the users accessing them. The capabilities and opportunities afforded by VRML are only limited by the imagination of the Web page author and available bandwidth. VRML promises to provide the capability to visit virtual worlds on the WWW, walk through them, and experience the multimedia power of the WWW. Microsoft maintains a sample Web page to demonstrate the capabilities of the VRML technology, as shown in Figure 17.14. If you would like to view such a site, use a search engine (such as http://www.yahoo.com or http://www.webcrawler.com) and execute a search on VRML.
Fig. 17.14
Microsoft Internet Explorer can be used to view VRML 3-D objects.
See "Significance of Bandwidth," [Ch 4]
Common Gateway Interface
The Common Gateway Interface (CGI) is a standard for extending the functionality of the WWW by enabling WWW servers to execute programs. Current implementations of WWW servers enable users to retrieve static HTML Web pages that can be viewed using a Web browser. CGI extends this idea by enabling users to execute programs in real time through a Web browser to obtain dynamic information from the WWW server. For example, a WWW site may provide up-to-the-minute stock quotes by executing a program that retrieves the stock prices from a database.
NOTE: The WWW Server at www.prosoft.com uses the method just described to provide users schedules of training classes for various locations around the country. It executes a program to retrieve the schedules from an online database.
The CGI interface basically serves as a gateway between the WWW server and an external executable program. It receives requests from users, passes them along to an external program, and then displays the results to the user through a dynamically created Web page.
The most common usage of the CGI standard is for querying information from a database server. Users enter queries into a Web page, the WWW server accepts the data, sends it to the application or processing engine that will process the data, accepts the results back from the processing engine, and displays the results to the user.
The CGI mechanism is fully platform independent and can transfer data from any browser that supports CGI to any WWW server that also supports CGI. Because a CGI program is basically an executable file, there are no constraints on what kind of program can be executed through a CGI script. A CGI program can be written in any language that can create executable programs, such as C/C++, FORTRAN, Pascal, Visual Basic, or PowerBuilder. CGI programs can also be written using operating system scripts, such as Perl, UNIX script, or MS-DOS batch files.
Internet Server API (ISAPI) refers to a set of programming APIs introduced by Microsoft for writing applications for the Microsoft Internet Information Server. ISAPI provides the same functionality as CGI, but in a slightly different manner.
ISAPI applications are dynamic-link libraries (DLL) rather than executables or scripts. Thus, when an ISAPI DLL is installed on the IIS server, it is loaded when the server starts. This provides exceptional speed and power to ISAPI applications as compared to CGI scripts, which must be loaded each time they are accessed. However, this also takes away some of the flexibility that CGI provides in terms of application development options and the capability to upgrade installed applications on-the-fly; upgrading an ISAPI application requires bringing down the IIS server.
ISAPI has garnered a lot of attention due to its remarkable speed advantages and is an excellent solution for building Web-based applications. ISAPI applications can be developed in any development tool that enables the creation of 32-bit Windows NT DLLs. Examples of such tools include Visual C++, Visual Basic 4.0, Delphi, and PowerBuilder.
See "Using ISAPI," [Ch 19]
Common Internet File System (CIFS) is a protocol that enables remote sharing and opening of files across the Internet. Rather than requiring files to be downloaded before they can be opened or read, CIFS provides users with direct read/write access to the files, doing away with the need for local storage of remote files.
CIFS is an enhanced version of the native file sharing system used by such operating systems as Windows 95, Windows NT, and OS/2, and is based on the Server Message Block (SMB) protocol. CIFS can run over TCP/IP links and is specifically optimized for slower dial-up connections. CIFS supports the Domain Name System (DNS) for name resolution across the Internet.
See "Name Resolution in the TCP/IP Environment," [Ch 10]
UseNet is a distributed discussion system that consists of a set of discussion groups called newsgroups. Newsgroups are organized in a hierarchy based on subjects, such as recreation, sports, news, information, and religion. Each hierarchy includes anywhere from a few groups to thousands of groups and can be subdivided into minor hierarchies. They are organized similarly to the structure of a hard disk with its directories and subdirectories.
UseNet uses the TCP/IP-based Internet backbone as its transport mechanism. The standard used by UseNet news (or netnews) for propagation of UseNet traffic is called the Network News Transport Protocol (NNTP). NNTP is a higher level protocol that runs on top of the TCP/IP protocol to facilitate communications between various servers running the UseNet server software.
NOTE: In the early days of the Internet, another service, UUCP, was predominantly used for propagation of UseNet news. UUCP stands for UNIX to UNIX Copy. The service is still used, but has been mostly replaced by the faster and more flexible NNTP protocol.
NOTE: The recreation hierarchy, designated by the rec keyword, is subdivided into lower-level hierarchies, such as rec.arts, rec.games, rec.pets, rec.sports, rec.travel, and so on. The rec.pets newsgroup, for example, can be further divided into lower-level hierarchies, such as rec.pets.dogs, rec.pets.cats, and so on.
The newsgroups enable users with the appropriate news reading software to view articles posted to these groups, post their own articles, and reply to articles posted by other users.
After an article is posted into a UseNet newsgroup, the article is broadcast using the NNTP service to other computers connected to the Internet and running the NNTP service. UseNet groups are different from mailing lists because they require central storage of articles at an NNTP server computer for viewing by all members of the network connected to that computer.
At last count, there were more than 19,000 UseNet newsgroups for topics ranging from distributed computer systems to daily soap operas. A multitude of UseNet news reading software programs are available on the Internet as shareware programs and also as commercial packages. Some Web browsers, such as Microsoft Internet Explorer and Netscape Navigator, have news reading capabilities built in. Figure 17.15 shows the user interface of a shareware product called Free Agent used for reading newsgroups. The same company also offers a more complete version of the program called Agent version 1.0.
Fig. 17.15
Free Agent is a shareware UseNet news reading program with powerful features
such as news threading, offline reading, and filtering.
The latest news reading programs, such as Free Agent, provide the following sophisticated features:
Electronic mail, or e-mail, is the most prevalent service on the Internet. E-mail enables Internet users to send messages to each other using a service called the Simple Mail Transfer Protocol (SMTP). Just as NNTP is used to transfer UseNet news, SMTP is used to transfer e-mail messages. SMTP also runs as a higher level service on top of the TCP/IP protocol.
E-mail provides a fast and cost-effective method of communication that is remarkably useful. E-mail messages can travel across the world in a matter of minutes to reach their destinations. Even though the WWW has been instrumental in bringing the Internet to the masses and transforming it into the Information Superhighway, the speed, effectiveness, and simplicity of the e-mail concept has made it the most widely used service over the Internet.
NOTE: The Internet community affectionately refers to the normal postal mail as snail mail due to its comparative slowness.
Numerous commercial and shareware software packages are available for receiving and sending e-mail messages using the SMTP service. Popular proprietary e-mail programs, such as Microsoft Mail, Microsoft Exchange, and Lotus cc:Mail have special interfaces for receiving and sending SMTP based e-mail. Microsoft Exchange Server is a part of BackOffice and is covered in Chapters 28 through 33.
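To show what such software does underneath, the following sketch in Python composes a message with the standard library and indicates how it would be handed to an SMTP server. The addresses and mail server named here are hypothetical.

```python
from email.message import EmailMessage
import smtplib

# Compose an e-mail message; the addresses are hypothetical examples.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Greetings over SMTP"
msg.set_content("SMTP carries this message between mail servers.")

# Delivery hands the finished message to an SMTP server
# (requires network access to a real mail server):
# with smtplib.SMTP("mail.example.com") as server:
#     server.send_message(msg)
```

The headers (From, To, Subject) travel with the message body; the SMTP service is only the transport that relays the finished message from server to server until it reaches the recipient's mailbox.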
An offshoot of individual, user-to-user e-mail connectivity is the invention of mailing lists. As the name suggests, mailing lists are similar in concept to the mass mailings you receive through the postal service. However, on the Internet, you must subscribe to a mailing list. Users just send a simple message to the mailing list administrator asking to be included in the list and shortly thereafter begin receiving messages originating from the list as normal e-mail messages.
Telnet is a service that enables users to log on to remote computers (that is, other computers on the Internet) and execute programs on those computers remotely. It is a mainstay of the early Internet, when it was the primary means of running applications on remote hosts. Today, Telnet is used mainly for remote administration of computer systems and for accessing Internet hosts to run command-line applications, such as Ping and Finger. Figure 17.16 presents a sample Telnet session for connecting to a remote computer system.
NOTE: Ping and Finger are two utility programs with origins in the UNIX operating system. Ping enables users to send an echo signal to another machine using the TCP/IP protocol. It is used to test the network connection. Finger enables users to find out how many users are connected to a machine and who they are. It also reports more detailed information about a particular user, if desired.
Fig. 17.16
Telnet can be used to connect to remote Internet hosts and execute programs
on the remote machine.
Telnet is inherently a command-line interface that uses the popular VT-100 terminal emulation for displaying information to the user. When users log on to a remote machine using Telnet, they are presented with a command-line prompt and can execute any command-line program on that machine.
The TCP/IP protocol suite bundled with Windows NT includes a Telnet client as part of the TCP/IP utility programs.
NOTE: The Windows NT Resource Kit includes a Telnet server for accessing a Windows NT Workstation or Windows NT Server machine. The Telnet server enables users to log in to a machine using a Telnet client, such as the one included with Windows NT.
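At the protocol level, a Telnet session is simply a TCP connection (normally to port 23) that mixes printable text with option-negotiation sequences, each beginning with the IAC byte (255). The following sketch is a simplified illustration rather than a complete Telnet client: it uses Python's socket module to connect and strips the most common three-byte negotiation sequences so a server's login banner can be displayed. The host name passed to read_banner is an assumed placeholder.

```python
import socket

# Telnet protocol constants: Interpret As Command plus the four option verbs
IAC, DONT, DO, WONT, WILL = 255, 254, 253, 252, 251

def strip_negotiation(data):
    """Remove three-byte IAC option-negotiation sequences, leaving text."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in (DONT, DO, WONT, WILL):
            i += 3  # skip IAC + verb + option code
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def read_banner(host, port=23, timeout=5):
    """Open a raw TCP session to a Telnet server and return its banner."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return strip_negotiation(s.recv(1024)).decode("ascii", "replace")
```

A full client would also answer the negotiation requests and relay keystrokes, but this sketch shows why any Telnet server appears as plain text once the IAC sequences are filtered out.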
File Transfer Protocol (FTP) is one of the earliest and most commonly used services on the Internet. It is a simple file transfer utility that enables a user to transfer files between a local computer and a remote computer running the FTP server service.
NOTE: FTP servers enable users to log on to a machine using FTP clients. All TCP/IP-based services require special server programs to facilitate access by client programs. Under Windows NT, most server programs, such as WWW Server, FTP Server, SQL Server, and Telnet Server, run as operating system services.
The TCP/IP protocol suite included with Windows NT includes an FTP client and an FTP server as part of the TCP/IP utility programs.
The FTP system is platform-independent and facilitates file transfers between disparate systems, such as a UNIX workstation and a DOS PC. The FTP protocol allows for the transfer of both plain-text ASCII files and binary files. Figure 17.17 presents a sample FTP session with the ftp.microsoft.com server site.
Fig. 17.17
Use FTP to transfer files between a local computer and a host computer (both
on the Internet), such as Microsoft's FTP server.
FTP uses a command-line interface that requires users to know and understand FTP keywords for transferring files. Many graphical FTP programs also are available that facilitate point-and-click use of FTP services. The FTP standard defines a basic set of commands that must be supported by all implementations of the FTP service.
Because FTP transfers information between client and server in clear text, it is not a very secure service and should not be used for transferring sensitive files or information. For example, when connecting to a host computer, users are required to enter a logon ID and password. Both are passed from the client to the server in clear text, so a third party could potentially intercept and view them.
A powerful feature of the FTP service is its capability for anonymous logon. An anonymous logon is similar in concept to the guest account on Windows NT machines. It enables users to log on to a machine and have viewing and reading rights on predetermined directories and files on the system. Users can download files from any FTP server that allows anonymous logon. By limiting the capabilities of the client, anonymous FTP also makes the service more secure than a normal logon.
See "A Flexible Set of Services," [Ch 3]
See "Managing Access to Shared Resources," [Ch 7]
Gopher was developed for retrieving information from an online database, such as the IRS catalog of tax forms. As with all high-level TCP/IP services, Gopher uses the client/server process model to facilitate the transfer of information between a user running a Gopher client and a Gopher server. Figure 17.18 shows a sample session with a Gopher server.
Fig. 17.18
Use a Gopher client to connect to a Gopher server.
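Under the hood, the Gopher protocol is strikingly simple: the client opens a TCP connection to port 70, sends a selector string terminated by a carriage return and line feed, and reads the reply until the server closes the connection. Menu replies consist of lines whose first character gives the item type, followed by tab-separated title, selector, host, and port fields. The following Python sketch illustrates that exchange; any host names used with it are placeholders.

```python
import socket

def parse_menu_line(line):
    """Split one Gopher menu line into (type, title, selector, host, port)."""
    itemtype, rest = line[0], line[1:]
    title, selector, host, port = rest.split("\t")[:4]
    return itemtype, title, selector, host, int(port)

def gopher_fetch(host, selector="", port=70):
    """Send a selector string and return the server's complete reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:          # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", "replace")
```

An empty selector requests the server's top-level menu, which is why a Gopher client can start browsing a site knowing nothing but its host name.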
Gopher is similar in concept to the FTP service; however, it enables only the retrieval of information and has no provision for uploading information to the server. Nonetheless, Gopher does provide the following significant advantages over FTP:
The Internet offers other services in addition to those already mentioned. Although perhaps less important, they are still useful, and you may find a reason to use some of them. The Internet is still growing, and new services are created and offered from time to time. The following is a brief list of a few additional services:
This is only a partial list of what the Internet offers, and as time goes by and technology advances, you are sure to see other applications added.
What does the future hold for the Internet? Internet backbone transmission rates have progressed from the initial 50 Kbps to more than 100 Mbps. Plans are currently underway to upgrade the backbone to transmission rates of 1 Gbps (gigabit per second) in the near future. Significant increases in transmission rates are necessary to accommodate the rapidly growing Internet population.
On the client access side, the advances have been even more dramatic. The Internet has evolved from a network of supercomputers accessible via 300 bps lines to a network of networks accessible from millions of locations at speeds up to 45 Mbps. Today, the lowest acceptable access speed for the average user connecting to the Internet is 28.8 Kbps, and 64-128 Kbps ISDN lines are quickly increasing in popularity.
In the near future, you can expect these access speeds to increase by an order of magnitude. Cable modems with claimed transmission rates of 5-10 Mbps are already on the horizon. Advances in ATM (Asynchronous Transfer Mode, a new standard for network connectivity) technology promise to put 25-100 Mbps network connections on corporate desktops, and eventually this technology will trickle down to the individual user. Because ATM technology is scalable, transmission rates will only go up from here. Additionally, satellite connections will provide increased access speeds for the Internet community.
Advances on the software side will be equally interesting. The current push is to develop standards for securing financial transactions on the Internet. When these standards are in place, you can expect to see a multitude of software applications, ranging from secure online shopping to online banking and online trading of financial instruments. Electronic commerce (a means for conducting financial transactions, such as credit card purchases, stock purchases, and automatic fund transfers from bank accounts) will become a common occurrence as users conduct their day-to-day business on the Internet.
Such tools as the InternetPhone, which enables users to carry on a real-time, audio-based conversation with others over the Internet, have already broken ground for a new class of multimedia applications. With the increase in available access speeds, you can expect to conduct extensive audio- and video-based interactive sessions on the Internet. The Internet will become a place where people can interactively communicate with one another.
If the Internet maintains its current rate of growth (and there is every indication that it will), most of the world's population could have Internet access by the end of the century.
Internet users will also have a host of professional and personal productivity applications and tools available to them on the Information Superhighway. The Internet is sure to provide the following:
The possibilities on the Information Superhighway are limitless. During the rest of this decade, you will witness the development of technologies and applications well beyond the dreams of the original Internet creators when they conceived the idea some 20 years ago.
This chapter presents the Internet and intranet as viable technologies for businesses to provide connectivity within their organizations and to the global network. For more information on the topics discussed in this chapter, see the following chapters:
© Copyright, Macmillan Computer Publishing. All rights reserved.