Theoretical Paper
- Computer Organization
- Data Structure
- Digital Electronics
- Object Oriented Programming
- Discrete Mathematics
- Graph Theory
- Operating Systems
- Software Engineering
- Computer Graphics
- Database Management System
- Operations Research
- Computer Networking
- Image Processing
- Internet Technologies
- Microprocessor
- E-Commerce & ERP
Practical Paper
Industrial Training
DBMS and RDBMS
The Nature of the Network
Data traversing the modern global network must run the gauntlet of a wide range of modern communications technology. Each packet is transmitted, bounced, copied, and mangled so often during its brief life that, at times, it seems remarkable that it is delivered at all. Yet, despite its complexity, the modern network is robust and reliable. This is testimony to the rapid pace of developments in communications hardware but perhaps equally to the adoption of a consensus approach to design issues by developers.
Heterogeneity
Modern networks are remarkably heterogeneous. IBM compatibles and Macintoshes rub shoulders with workstations and mainframes; DOS and Windows platforms share data with UNIX and every other operating system; Ethernet and token ring converse with FDDI and ATM; and all of this takes place over a chaotic mixture of physical media. A single network packet might pass through thinnet, twisted-pair, and fiber-optic cables and laser line-of-sight links before being bounced off a satellite to pass through a similar mix of media at the receiving end.
Modularity
Heterogeneity is made possible by the growing emphasis on modularity in the design of network hardware and software. There was a time when many developers produced network environments that ran proprietary network software over proprietary hardware, using a single type of cable. These simple networks were attractive to many consumers who required a simple, out-of-the-box network solution. Inevitably, however, requirements arose that a purely proprietary system could not meet: consumers decided to link their network to a different type of network, or they discovered that a resource they needed to share over the network was incompatible with their network product.
The modular approach overcame these difficulties to a large extent. Network products began to focus on a small area of the network landscape. By the late 1980s, instead of having to decide which network to purchase, consumers found themselves making separate choices for network adapters, cabling systems, interconnection devices, network operating systems (NOSs), and network applications.
Increasing modularity in network software has had an especially profound impact. This is particularly apparent on the desktop, where network protocols can be mixed and matched to suit the user's needs. Modularity's effects are becoming increasingly marked on a more macroscopic level, as network administrators combine NOSs to provide the required combination of services rather than rely on a single product.
Standards
The modular approach is possible only when adequate standards are agreed upon. Nobody can predict exactly what a user is going to want to send over the network, nor can anyone know with certainty what the next stage of development in network technology will bring. The only way a developer can be sure that its product will work with the rest of the network is by adhering rigorously to the recognized standards. Each hardware manufacturer must know exactly what it can expect as input to its part of the system and what its system must generate as output. Software developers work to similar specifications.
The OSI network model discussed in chapter 3, "The OSI Model: Bringing Order to Chaos," has been extremely influential in this regard, allowing for a clear, logical delineation of responsibilities between the many components of a network. At the desktop level, Novell's ODI specification has allowed what would previously have been unimaginable--several different network protocols running simultaneously and smoothly on the same hardware. In both cases, the adoption of a standard has made development possible.
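The clean delineation of responsibilities that layered standards such as OSI provide can be sketched in a few lines of code: each layer treats everything handed down from the layer above as an opaque payload and simply prepends its own header, so no layer needs to know anything about the others. The layer names and header formats below are illustrative only, not part of any real protocol stack.

```python
# A minimal sketch of layered encapsulation in the OSI spirit.
# Each layer knows only its own header; the payload from the
# layer above is treated as an opaque block of bytes.
# (Layer names and header formats are illustrative only.)

def encapsulate(payload: bytes, headers: list[bytes]) -> bytes:
    """Wrap payload with each layer's header, innermost first."""
    for header in headers:
        payload = header + payload
    return payload

def decapsulate(frame: bytes, headers: list[bytes]) -> bytes:
    """Strip each layer's header on the receiving side, outermost first."""
    for header in reversed(headers):
        assert frame.startswith(header), "unexpected header"
        frame = frame[len(header):]
    return frame

app_data = b"GET /index.html"
layers = [b"[TCP]", b"[IP]", b"[ETH]"]  # transport, network, data link

frame = encapsulate(app_data, layers)
print(frame)                        # b'[ETH][IP][TCP]GET /index.html'
print(decapsulate(frame, layers))   # b'GET /index.html'
```

Because each layer touches only its own header, one layer can be swapped out (a different cable, a different protocol) without disturbing the rest, which is exactly what specifications such as ODI exploit.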
The Scope of Networking
The rapid growth in the number of networked computers over the last decade or so has been dramatic. One index of this growth is the number of Internet host computers, which is now in excess of six million. Figure 1.1 depicts the increase over several years to mid-1995, using data produced by Network Wizards and available at http://www.nw.com/. The number of people with Internet access is extremely difficult to quantify but is currently in excess of twenty million worldwide.
Estimates of the growth in Internet connections, while difficult, are at least possible because of the integrated nature of the Internet. The growth in enterprise-level computing is similarly dramatic but impossible to measure in the same way, as many LANs are isolated from the world beyond the enterprise.
Figure 1.1. The growth of Internet hosts from 1988 to 1995.
Growth Rate
A remarkable feature of this expansion is that the rate of growth has continued to increase over a period of years, which has led to predictions of one Internet node for every human being on the planet by the early years of the next century. Growth accelerates because connecting two networks expands and enhances both. Connecting thousands of LANs made the combined resources of the Internet so vast that it eventually became unrealistic for network planners to attempt to rival it; better to connect to it, take advantage of it, and, at the same time, contribute to it.
This exponential growth cannot continue indefinitely, but by the time it begins to slow, Internet access is likely to be as commonplace as cable television. Networks in the home may have seemed unlikely a few years ago, yet some homes have already been fitted with network cabling. The emergence of cost-effective, higher-bandwidth technologies such as ISDN suggests that high-speed domestic network access is not far away.
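The scale of this growth is easy to check with back-of-the-envelope arithmetic. The mid-1995 figure of roughly six million hosts comes from the text; the 1988 starting figure of roughly 60,000 hosts is an assumption for the sketch, chosen only to illustrate the calculation.

```python
import math

# Illustrative growth calculation. The mid-1995 figure (~6 million
# hosts) is from the text; the 1988 figure (~60,000 hosts) is an
# assumed starting point for this sketch.
hosts_1988 = 60_000
hosts_1995 = 6_000_000
years = 7.5  # early 1988 to mid-1995

# Compound annual growth rate: hosts multiply by this factor each year.
cagr = (hosts_1995 / hosts_1988) ** (1 / years)

# Doubling time implied by exponential growth at that rate.
doubling_years = math.log(2) / math.log(cagr)

print(f"annual growth factor: {cagr:.2f}x")
print(f"doubling time: {doubling_years:.1f} years")
```

Under these assumptions the host count nearly doubles every year, which is why the "one node per person" predictions of the period did not seem far-fetched at the time.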
Network Awareness
In tandem with this rapid growth in the extent of networks in general and the Internet in particular, there has been a significant change in the general level of awareness of networks. Most computer literate people have by now at least heard of the Internet; many who are not otherwise familiar with computers have also heard of it. In fact, a substantial number of people are now buying their first computer for the purpose of accessing the Internet. Access to the network has become an end in itself.
This heightened awareness has been fueled by the rapid expansion of networks into everyday life. Networking the office is no longer a simple matter of new office equipment, with an impact comparable to the arrival of a new fax machine. Instead, it brings a fundamental change in the way the enterprise functions internally, in the relationship between the enterprise and the world outside, and in the staff's perception of their relationship to the world beyond the office. There is now an increasing awareness within organizations that networking can bring about this type of change.
Network Readiness
Application software has been tracking this shift in awareness for some time. Not so many years ago, many applications would balk at working in a networked environment; how many packages refused to use drive letters beyond E? Yet today, more and more packages are claiming to be "network ready" or "network aware" as software developers increasingly recognize networked computers as the norm rather than the exception. Recent developments in software monitoring and license enforcement as described in appendix D, "Software License Metering," reflect this change.
Technology
The technological innovations that have driven these changes are not as extensive as may be imagined. Most of the building blocks of the modern network--data packets, protocols, cabling systems--were developed at Xerox's Palo Alto Research Center in the 1970s. While the speed and capacity of network hardware have developed enormously in recent years, and while today's networks are built with increasingly sophisticated components, there has been nothing like the paradigm shift introduced by the invention of, say, the transistor.
Today's Networks
The rapid pace of change in network hardware and software is reflected in the range of systems in use across the world at the present time. Most network installations, particularly those in the medium and large categories, consist of a mixture of old and new hardware and software.
Legacy Systems
Some of the most out-of-date network equipment is found at some of the most progressive institutions. These were the pioneers, the ones who invested in early systems at a time when networking was still experimental. They had to struggle with relatively primitive equipment and endure frequent crashes and network hiccups, only to find that their system was obsolete almost as soon as it was stable.
They then faced a dilemma. Should they write off the old equipment to experience and graduate to a better system, incurring high capital and manpower costs? Or did it make more sense to stick with the old system, enhancing it where possible and hoping to switch at some point in the future? In many cases, the scale of investment required to change was too high. As a result, the existing systems were retained after they had ceased to be worth the maintenance effort. New equipment was brought in on a piecemeal basis to shore up the creaking system until, finally, something gave and the required investment was made.
This reluctance or inability to move on to more modern systems has left a considerable amount of old equipment in use. The new products are faster, more modular, and more robust, but most network administrators have to support at least some software and hardware that they would rather see scrapped.
The Modern Network
The modular nature of modern network products and the lessons learned about investment and obsolescence have helped to make recent networks more manageable. They are designed with a view to the finite life span of the components. Obsolescence is planned rather than being allowed to creep up, and the initial investment is made with the understanding that substantial ongoing resources, both monetary and human, will be required to maintain the network as a functioning entity. There is a realization that no matter what capacity is provided, the users will almost certainly exhaust it within a matter of months.
Hardware. Computers bought for network access are most likely to be IBM-compatible PCs, Macintoshes, and UNIX systems, in that order. In the past, Apple relied on superior technical innovation, particularly in the network arena; when this failed to secure the market, Apple staked a good deal on its PowerPC. As yet, it has not made a significant impact.
Many of the computers connected to networks around the world were, of course, bought before any network was available. These legacy systems will be around for the foreseeable future, and they form a significant part of most network communities. They make their presence felt by being less powerful and less well integrated into the network than their more modern cousins.

Question: Difference between DBMS & RDBMS.
Solution:

| DBMS (e.g., FoxPro) | RDBMS (e.g., Oracle) |
| --- | --- |
| A computer program that manages a permanent, self-descriptive repository of data. | A computer program that presents an abstraction of relational tables to the user. It provides three kinds of functionality: (1) presents the data in the form of tables; (2) provides operators for manipulating those tables; (3) supports integrity rules on tables. |
| No user demarcation exists. | User demarcation exists. |
| No user hierarchy. | User hierarchy exists. |
| Typically uses B-tree indexes. | Typically uses B+ tree indexes. |
| Maintains no relationships between tables. | Maintains referential integrity between tables. |
| Less secure. | Highly secure. |
| Less expensive. | More expensive. |
| Satisfies fewer of Codd's rules. | Satisfies more of Codd's rules. |
| No triggers. | Triggers exist. |
| Does not support multiple users. | Supports multiple concurrent users. |
| The exact location of the data is easily known, since every DBF file has a file name that can be accessed from the OS level. | The exact location of the data is not easily known, because of the concept of table spaces. |
| Good for small-scale applications. | Good for large-scale applications. |
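Several of the RDBMS-side features in the comparison above, notably referential integrity and triggers, can be demonstrated with any modern relational engine. The sketch below uses Python's built-in sqlite3 module; the table and column names are illustrative only.

```python
import sqlite3

# Sketch of two RDBMS features from the comparison: referential
# integrity and triggers. Table/column names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES dept(id))""")
con.execute("CREATE TABLE audit (msg TEXT)")

# A trigger: fires automatically whenever a row is inserted into emp.
con.execute("""CREATE TRIGGER log_emp AFTER INSERT ON emp
    BEGIN
        INSERT INTO audit VALUES ('hired ' || NEW.name);
    END""")

con.execute("INSERT INTO dept VALUES (1, 'Networks')")
con.execute("INSERT INTO emp VALUES (1, 'Ada', 1)")  # valid reference

# Referential integrity: a row pointing at a missing department is
# rejected by the engine itself, not by application code.
try:
    con.execute("INSERT INTO emp VALUES (2, 'Bob', 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

print(con.execute("SELECT msg FROM audit").fetchall())  # trigger fired
```

This is exactly the contrast the table draws: in a file-based DBMS such as FoxPro, both the consistency check and the audit log would have to be written by hand in every application that touches the data.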