Thursday, July 14, 2011
Switches
Switches allow us to create a "dedicated road" between individual users (or small groups of users) and their destination (usually a file server). They work by providing many individual ports, each running at 10 Mbps, interconnected through a high-speed backplane. Each frame, or piece of information, arriving on any port carries a Destination Address field which identifies where it is headed. The switch examines each frame's Destination Address field and forwards the frame only to the port attached to the destination device; it does not send it anywhere else. Several of these conversations can pass through the switch at one time, effectively multiplying the network's bandwidth by the number of conversations happening at any particular moment.
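The forwarding decision described above can be sketched in a few lines of Python. This is only an illustration (the port count and addresses are made up, and real switches do this in hardware): the switch keeps a table mapping each device's address to a port, learned from the source address of frames it sees, and forwards a frame only to the destination's port when that port is known, flooding it otherwise.

```python
class Switch:
    """Toy model of a learning switch's forwarding decision."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # device address -> port it was last seen on

    def receive(self, in_port, src_addr, dst_addr):
        # Learn: remember which port the sender is attached to.
        self.mac_table[src_addr] = in_port
        # Forward: only to the destination's port if known...
        if dst_addr in self.mac_table:
            return [self.mac_table[dst_addr]]
        # ...otherwise flood to every port except the one it arrived on.
        return [p for p in range(self.num_ports) if p != in_port]


sw = Switch(4)
print(sw.receive(0, "aa", "bb"))  # "bb" unknown: flood to ports 1, 2, 3
print(sw.receive(1, "bb", "aa"))  # "aa" was learned on port 0: forward there only
```

Because frames for different destinations leave on different ports, two such "conversations" can cross the switch at the same time without sharing a wire.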
Another analogy which is useful for understanding how switches increase the speed of a network is to think in terms of plumbing. For the sake of argument, assume that every PC on a network is a sink, and a 10 Mbps connection is a 1/2-inch pipe. Normally, a 1/2-inch pipe will allow enough water to flow for one or two sinks to have enough water pressure to fill quickly. However, putting more sinks on that same 1/2-inch pipe will drop the water pressure enough that eventually the sinks take a very long time to fill.
To allow all sinks to fill quickly, we can connect the source of water to a larger (6-inch) pipe, and then connect each sink to the 6-inch pipe via its own 1/2-inch pipe. This guarantees that all sinks will have enough water pressure to fill quickly. See Figure One for an image of this concept.
Most network operating systems now use a "Client-Server" model. Here, we have many network users, or "clients," accessing a few common resources, or "servers." In terms of our earlier highway example, this is like having a hundred individual roads all converging at two or three common points. If these common points are the same width as the individual roads, they create a major bottleneck, and the end result is exactly the same as if everyone were sharing one small road. This totally defeats the purpose of building all the individual roads in the first place.
The solution is to widen the road to our shared resource so that it can support the full load of most or all of the individual roads at once. In other words, we increase the bandwidth to our servers while connecting our clients at 10 Mbps. This is usually referred to as a High Speed Backbone. In networking slang, it is commonly called a "Fat Pipe."
This layout splits our overall network into four subnetworks. From left to right, these subnetworks are outlined in Red, Green, Blue, and Violet. The Red subnetwork is a shared 10 Mbps setup, with all of the "Undemanding Users" sharing 10 Mbps of bandwidth. The Green and Blue subnets are dedicated 10 Mbps connections, sometimes referred to as "Private Ethernets." Here, each of the two power users has 10 Mbps of bandwidth dedicated to his or her machine, and this bandwidth is not shared with anyone else. Finally, we have our Violet subnetwork. This one is a Fast Ethernet setup running at 100 Mbps, with the bandwidth shared by the two servers.
This is the most common way of setting up a switched network, and it almost always yields an optimal price/performance ratio. We limit the amount of expensive Fast Ethernet hardware by using it only where its cost is justified by the load at that point in the network, while leveraging an existing investment in 10 Mbps equipment in less demanding areas. Because a 10/100 switch is a fairly costly piece of equipment, each port we dedicate to a single user is also rather expensive, so ports are dedicated to individual users only where that user's load justifies it. Finally, we can set up shared subnetworks which lump anywhere from two up to 100 users on one switch port.
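The trade-off above can be made concrete by computing the worst-case per-user bandwidth in each subnetwork. The user counts below are assumptions for illustration (the article only says the Red subnet is shared among the "Undemanding Users"):

```python
# Rough per-user bandwidth for each subnetwork in the layout above.
# User counts are assumed for illustration.
subnets = {
    "Red (shared 10 Mbps)":   {"link_mbps": 10,  "users": 20},
    "Green (dedicated)":      {"link_mbps": 10,  "users": 1},
    "Blue (dedicated)":       {"link_mbps": 10,  "users": 1},
    "Violet (Fast Ethernet)": {"link_mbps": 100, "users": 2},
}

for name, s in subnets.items():
    per_user = s["link_mbps"] / s["users"]
    print(f"{name}: {per_user:.1f} Mbps per user under full load")
```

With these numbers, an undemanding user on the shared Red segment gets as little as 0.5 Mbps when everyone is active, each power user always gets a full 10 Mbps, and each server gets up to 50 Mbps — which is why the expensive hardware goes where the load is.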