Welcome to Part 2. This part covers the background and the current topology that is to be replaced; the real fun starts in Part 3. I know you are all impatient to get started configuring switches, routers and servers, but I think the design, and the reasons behind it, are as important (if not more so) than the actual configuration. As always, I don't claim to be perfect or all-knowing, so if from the background and the following tutorials you think I have missed something or could have done something better, please speak up: send me a message via the contact page or leave a comment.
NINet has grown since the last series and now has a floor of approximately 40 developers working on video solutions. The current network was put together in bits and pieces and is suffering from poor performance and frequent outages. It must be stressed that this is a development network and as such cannot be locked down: developers need to be able to connect unmanaged switches (much to my irritation) and attach devices at will. The main problems are:
- Developers plug in devices with DHCP enabled, which causes frequent address conflicts on the network for all users.
- Unmanaged switches plugged in by developers frequently end up causing bridging loops.
- Numerous choke points.
- Cumbersome to manage due to the number of different switches (OS differences), with each switch being a separate entity.
- One of the 3Com switches is faulty and falls over every so often, requiring a reboot.
The requirements for the new network are:

- CR2 needs 96 ports.
- VL1 needs a PoE switch and 96+ ports.
- Keep costs down, but don't skimp.
- Easily extendable if there is growth.
- Minimise downtime.
Now that you have all cringed, and maybe even curled up into a ball sucking your thumb while rocking back and forth slowly, it's time to solve this nightmare of a situation.
Let's break it down into problems and solutions:

- Developers plug in devices with DHCP enabled: let's split the network up to isolate the issue (VLANs and subnets).
- Unmanaged switches plugged in by developers: sounds like a job for the Spanning Tree Protocol.
- Choke points: we can either get stackable switches or aggregate ports to increase our uplink capacity.
- Cumbersome to manage due to the number of different switches: get stackable switches and standardise on a single vendor.
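To make those solutions a little more concrete, here is a rough IOS sketch of the VLAN, spanning-tree and link-aggregation pieces. All VLAN IDs, names and port numbers below are placeholders I've picked for illustration, not the final design:

```
! Per-area VLANs so a rogue DHCP device only affects its own subnet
vlan 10
 name CR2-DEV
vlan 20
 name VL1-DEV
!
! Rapid PVST+ so a looped unmanaged switch gets a blocked port,
! not a broadcast storm; pin the root on the core stack
spanning-tree mode rapid-pvst
spanning-tree vlan 10,20 root primary
!
! Developer access ports
interface range GigabitEthernet1/0/1 - 24
 switchport mode access
 switchport access vlan 10
!
! Aggregate the inter-room uplinks (LACP) to ease the choke points
interface range TenGigabitEthernet1/1/1 - 2
 channel-group 1 mode active
```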
The hardware I settled on:

- 2x 3750X 48-port with 10G modules & IP Services
  - The 3750X has the ten-gig support we need for the inter-room link, and will also let us reuse the modules from the 3560s, which saves some money.
  - 3750s are stackable, which lets us use all the front ports for data connections rather than uplinks.
  - Stacking also reduces the number of switches we have to manage by providing a single management point per stack.
  - IP Services allows EIGRP to handle inter-VLAN routing, and means that if we need to grow to a third stack we just buy a new 3750X and replicate the topology, i.e. set up local VLANs and add the router to the ASN.
- 2x 3750G 48-port with IP Base
  - High port density.
  - No need for IP Services, since all traffic has to hit the 3750X to leave the stack; this also reduces the cost.
- 1x 3750G 24-port PoE with IP Base
  - Meets the PoE requirement.
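As a sketch of how the routing side fits together, the 3750X stack would carry the VLAN gateways and an EIGRP process along these lines. The AS number and addressing here are examples I've invented, not the final addressing plan:

```
! Enable routing on the 3750X stack (IP Services)
ip routing
!
! SVIs act as the default gateway for each VLAN
interface Vlan10
 ip address 192.168.10.1 255.255.255.0
interface Vlan20
 ip address 192.168.20.1 255.255.255.0
!
! EIGRP handles inter-VLAN routing; a third stack would just
! define its own local VLANs and join the same ASN
router eigrp 100
 network 192.168.10.0 0.0.0.255
 network 192.168.20.0 0.0.0.255
 no auto-summary
```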
What is not in the diagram is an additional VLAN 2 using the old 192.168.2.0/22 range, which will be used during the transition period to minimise downtime. This means the new switches can go into service before we are ready to re-IP everyone: all we do is set up an interface as a gateway on this network, and routing should take care of the rest.
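For the transition VLAN, the gateway interface amounts to something like the following. The gateway address is an assumption on my part and should match whatever the old network used; the EIGRP ASN is the same placeholder used elsewhere in this series' examples:

```
! Transition gateway on VLAN 2 for the legacy /22 range
interface Vlan2
 ip address 192.168.2.1 255.255.252.0
!
! Advertise the legacy network so the old and new subnets can reach each other
router eigrp 100
 network 192.168.0.0 0.0.3.255
```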
- Part 1
- Part 2