Saturday, February 27, 2021

Stacking HPE 5130 Switches

The switches used in this example are HP 5130s.

WHAT ARE STACKED SWITCHES?

Stacked switches are multiple switches connected together in a cluster to appear as a single logical device. The benefits of this are easier management, with fewer IP addresses to document, and redundancy: you can have multiple uplinks, so if a link goes down, client traffic can continue over the remaining uplinks.

You can even use link aggregation on the uplinks to simplify spanning-tree topologies as well as load balance traffic up to the distribution layer switches.

MY LAB

Below is an example of the lab I am using for this. It's two HPE 5130s with stacking cables connected to the Ten-GigabitEthernet ports. The image below shows the completed stack.

The cables are connected last: once all the configuration is in place, you then slot in the stacking cables.

CONFIGURING THE FIRST SWITCH

The first switch is the top one. It will need to be numbered to represent its position in the stack (as it's the top switch, it will be switch 1). You can use the command 'display irf' to show the IRF topology. As the screenshot below shows, switch 1 is already set as member 1.

Now that it is numbered correctly, the stacking ports need configuring. First, shut down the ports you will be using, in this case Ten-GigabitEthernet 1/0/49 and 1/0/50, then assign them to an IRF port.

You create the IRF ports using the command 'irf-port <switch-num>/<port-num>', so I will be using irf-port 1/1 and 1/2. You define which physical port(s) you want to be part of each IRF port; once all your physical ports are associated with an IRF port, bring those ports back up.

Once the ports are ready, you can activate IRF on the switch using the command below, then save the configuration and reboot the switch.
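Putting those steps together, the full sequence for switch 1 might look something like this on Comware (a sketch based on the commands described above; exact syntax can vary between firmware releases, so check your command reference):

```
system-view
# Shut down the physical ports that will carry the stack
interface Ten-GigabitEthernet 1/0/49
 shutdown
quit
interface Ten-GigabitEthernet 1/0/50
 shutdown
quit
# Create the IRF ports and bind a physical port to each
irf-port 1/1
 port group interface Ten-GigabitEthernet 1/0/49
quit
irf-port 1/2
 port group interface Ten-GigabitEthernet 1/0/50
quit
# Bring the physical ports back up
interface Ten-GigabitEthernet 1/0/49
 undo shutdown
quit
interface Ten-GigabitEthernet 1/0/50
 undo shutdown
quit
# Activate the IRF port configuration, then save
irf-port-configuration active
save
```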

CONFIGURING THE SECOND SWITCH

As this is the second switch, it needs to be renumbered, since switches by default are set as member 1. I can confirm this by using the 'display irf' command, then renumber the switch using the command shown below. You will need to save your changes and reboot the switch for the change to take effect.
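The renumbering steps described above might look like this (a sketch; verify against your firmware's command reference):

```
display irf                   # confirm the current member ID (1 by default)
system-view
irf member 1 renumber 2       # renumber member 1 to member 2
save
quit
reboot                        # the new member ID takes effect after the reboot
```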

After the reboot, I can check the IRF topology again to see it has now updated to be member 2.

Now, like before, the switch's ports need setting up: shut down the physical ports being used for stacking, create the IRF port and associate the physical ports with it, then bring them back up. Finally, the IRF fabric is activated to complete the setup of switch 2.
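Switch 2's port setup mirrors switch 1, but using its new member number in the interface and IRF port names (again a sketch; syntax may vary by release):

```
system-view
# Shut down the stacking ports (now numbered under member 2)
interface Ten-GigabitEthernet 2/0/49
 shutdown
quit
interface Ten-GigabitEthernet 2/0/50
 shutdown
quit
# Create the IRF ports for member 2 and bind the physical ports
irf-port 2/1
 port group interface Ten-GigabitEthernet 2/0/49
quit
irf-port 2/2
 port group interface Ten-GigabitEthernet 2/0/50
quit
# Bring the ports back up and activate IRF
interface Ten-GigabitEthernet 2/0/49
 undo shutdown
quit
interface Ten-GigabitEthernet 2/0/50
 undo shutdown
quit
irf-port-configuration active
save
```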

CONNECTING THE SWITCHES

With the configuration set on each switch, all that is left is to connect the stacking cables. Push them in until they click into place and the stack will form automatically. Some switches may reboot; typically the non-master switches will. Once everything is back up, you can check the IRF topology and you should see both switches present.

Wednesday, February 17, 2021

Setting up Self-Service Password Reset (SSPR) in Microsoft Azure

This covers setting up the password reset service in Microsoft Azure.

ENABLING SELF-SERVICE PASSWORD RESET

Within Azure Active Directory, go to the Password Reset section.


Under the Properties section, there is a toggle with the options None, Selected and All, which enables self-service password reset. All enables SSPR for every user in the tenant, None disables the feature, and Selected enables it for a specific security group (useful for rolling it out to specific users only).


SETTING YOUR OPTIONS FOR SSPR

You can now define your options for the self-service password reset process: how many authentication steps are required, and which authentication methods users can choose, as shown below.


You can set Registration settings so users have to register the first time they sign in to define their MFA details (set security question answers, enter a mobile phone number, etc.), as well as how often they have to re-confirm their authentication details.


You can control the notification settings, such as notifying a user when their password is reset, as well as notifying all admins when another admin resets their password.

Tuesday, February 16, 2021

Creating Storage Pools in Windows Server

This covers setting up new disks to be part of a Storage Pool on Windows Server.

BRINGING THE DISKS ONLINE

Within Server Manager, the Disks section within File and Storage Services will show your new disks. You will need to bring them online by right-clicking each one and selecting 'Bring Online'.

CREATING THE STORAGE POOL

Once your disks are online, we can pool them together. Under Storage Pools in File and Storage Services, right-click in the storage pools area and select 'New Storage Pool'. This will open the New Storage Pool wizard.

Within the wizard, set a name for the Storage Pool.

Now select the disks you wish to be part of the pool.

On the final page of the wizard, tick the 'Create a virtual disk when this wizard closes' option to automatically open the virtual disk creation wizard once it completes.

CREATING A VIRTUAL DISK

Now that the pool is created, we need to create some virtual disks. If you ticked the option in the last step, you should see the New Virtual Disk wizard. Follow through the wizard, giving your disk a name and setting the desired options.

After creating your disk, the option to automatically open the volume creation wizard should already be selected.

CREATING VOLUMES ON THOSE DISKS

We have a storage pool and virtual disks, but we now need volumes to make them usable. The volume creation wizard should have opened after the last step. Follow through the wizard, selecting the desired disk, naming the volume, and choosing a drive letter and file system settings.

At the end, you will be able to see the volumes you have created under Volumes in File and Storage Services, as shown below.

Under Disks in Storage Pools, you will also be able to see the created virtual disks.
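As an aside, the same steps can be scripted with PowerShell's Storage cmdlets (a hedged sketch: the pool, disk and label names are made up, and Mirror resiliency is an example choice that needs at least two disks):

```powershell
# 1. Pool the available (unpooled) physical disks into a new storage pool
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# 2. Create a virtual disk on the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
    -ResiliencySettingName Mirror -UseMaximumSize

# 3. Initialise the disk, then create and format a volume on it
Get-VirtualDisk -FriendlyName "VDisk1" |
    Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```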

Friday, February 12, 2021

Understanding Multi-Tiered Network Design

This covers designing networks with multiple tiers and redundancy.

DESIGNING A NETWORK

When designing a network, uptime, resiliency, redundancy, performance and manageability are the main qualities to design into the network.

If you build a network with plenty of redundancy, no single point of failure can cause a network outage. Redundancy allows the network to continue running even when a link or network device is down: end users can keep working while the network team resolves the issue. This resiliency keeps uptime high and lowers the impact of faults.

Redundancy can also improve performance. Multiple links between network devices can be logically combined to act as a single link, allowing traffic to load balance across them for higher throughput. If one member link goes down, spanning tree won't need to recalculate the tree and root ports, so traffic continues to flow.

SIMPLE NETWORK DESIGN

Below is an example of a simple network: a router, a layer 3 switch, and layer 2 switches for end devices. This network can work, but if a link or the layer 3 switch goes down, the network won't be able to carry on running.

TIERED NETWORK DESIGN

With redundancy, the network could instead look like the tiered design below. End devices connect to layer 2 switches at the access layer; if traffic needs to flow further through the network, it can go up to the distribution layer and then down to another layer 2 switch, or up to the routers for internet access.

If a link or a layer 3 switch goes down, the redundant link can be used, avoiding disruption to the flow of traffic and uptime within the network.

Segmenting the network like this also makes it easier to manage. The flow of traffic is easier to understand and can be controlled more easily with ACLs, and controlled segments like DMZs can be added in.

FURTHER REDUNDANCY

You can go even further with clustering: the access layer switches can be clustered to behave logically as a single switch, and the same can be done with the layer 3 switches. The routers can then use first-hop redundancy protocols such as HSRP, VRRP or GLBP, so that if one router goes down, another takes over.
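As an illustration of router redundancy, HSRP on a pair of Cisco routers might look like the following (a hedged sketch: the interface names, addresses and priorities are made-up example values):

```
! Router A - preferred active gateway (higher priority)
interface GigabitEthernet0/0
 ip address 192.168.10.2 255.255.255.0
 standby 1 ip 192.168.10.1       ! shared virtual gateway address clients use
 standby 1 priority 110
 standby 1 preempt

! Router B - standby gateway, takes over if Router A fails
interface GigabitEthernet0/0
 ip address 192.168.10.3 255.255.255.0
 standby 1 ip 192.168.10.1
 standby 1 priority 100
```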

Understanding the DHCP Request Process

This covers the underlying process of clients requesting and receiving DHCP IP Addresses from DHCP servers.

DHCP MESSAGES

The DHCP process is made up of 4 messages between the client and the DHCP server.

  1. Discover - The client broadcasts out to see if there's a DHCP server.
  2. Offer - The DHCP server responds with an IP address the client can have.
  3. Request - The client requests to use that IP address.
  4. Acknowledgement - The DHCP server acknowledges that the client now has that IP address.

The messages from the client are broadcast, whereas the DHCP server typically responds using unicast.
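The four-step exchange above can be modelled as a tiny state machine (an illustrative sketch only, not a real DHCP client; the function names are made up):

```python
# The DORA sequence from the list above, in order.
DORA = ["DISCOVER", "OFFER", "REQUEST", "ACK"]

def next_message(current):
    """Return the message that follows `current` in the DHCP exchange,
    or None once the ACK has completed it."""
    i = DORA.index(current)
    if i == len(DORA) - 1:
        return None  # ACK is the final message; the client now holds the lease
    return DORA[i + 1]

def sender(message):
    """Discover and Request come from the client; Offer and Ack from the server."""
    return "client" if message in ("DISCOVER", "REQUEST") else "server"
```

For example, `next_message("DISCOVER")` returns `"OFFER"`, and `sender("OFFER")` returns `"server"`, matching the numbered steps above.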

What is Administrative Distance?

This covers administrative distance in terms of routing within computer networking.

THE NEED FOR ADMINISTRATIVE DISTANCE

When routing, there may be several candidate best routes. Say you were going to Network B: OSPF will determine the best route there, EIGRP will determine a best route as well, IS-IS would too, and maybe you also have a static route to Network B. That's four different best paths to Network B, but which one will be put into the routing table as the main route? This is what administrative distance handles.

Administrative distance gives each routing source a score that rates how reliable and trustworthy its routes are; the lower the score, the better. So if there are two routes to a network, one from OSPF and one from a static route, the administrative distance scores decide which route to use.

The list below shows the administrative distance scores for different routing protocols and methods of routing traffic. So in the case of OSPF vs a static route, the static route is trusted more and will be used over OSPF.

Route Protocol / Source    Administrative Distance Score
Connected Devices          0
Static Routes              1
EIGRP Summary Route        5
External BGP               20
EIGRP                      90
OSPF                       110
IS-IS                      115
RIP                        120
External EIGRP             170
Internal BGP               200
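The selection rule above is simple enough to sketch in a few lines of Python (the function name and route-source keys are made up for illustration):

```python
# Administrative distance scores from the table above; lower wins.
ADMIN_DISTANCE = {
    "connected": 0,
    "static": 1,
    "eigrp_summary": 5,
    "ebgp": 20,
    "eigrp": 90,
    "ospf": 110,
    "isis": 115,
    "rip": 120,
    "external_eigrp": 170,
    "ibgp": 200,
}

def best_route(candidates):
    """Given route-source names all offering a path to the same network,
    return the one a router would install (lowest administrative distance)."""
    return min(candidates, key=lambda source: ADMIN_DISTANCE[source])
```

So `best_route(["ospf", "static"])` returns `"static"`, matching the OSPF-vs-static example above.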

What's the Difference Between Collision Domains & Broadcast Domains?

This covers the difference between a collision domain and broadcast domain in terms of computer networking.

WHAT IS A COLLISION DOMAIN?

Collision domains describe the area of a network where frames can collide. The number of collision domains depends on the equipment in use.

The image below shows a 4-port hub and a 4-port switch. The hub and its connected devices are all a single collision domain. Why? Because a hub forwards frames out of all interfaces, so there's the possibility of a collision on every interface whenever traffic is transmitted.

The switch in the image has four collision domains. Why? Switches are more intelligent: they make forwarding decisions based on the MAC address table, so each link between an end device and the switch is an isolated collision domain. There are four links, therefore four collision domains.
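The counting rule above can be expressed as a tiny function (a toy model for illustration; the function name and tuple format are made up):

```python
def collision_domains(devices):
    """Count collision domains for a list of (kind, port_count) tuples,
    where kind is "hub" or "switch". A hub and everything on it share one
    collision domain; each switch port is its own collision domain."""
    total = 0
    for kind, ports in devices:
        total += 1 if kind == "hub" else ports
    return total
```

For the image described above, a 4-port hub gives `collision_domains([("hub", 4)]) == 1` while a 4-port switch gives `collision_domains([("switch", 4)]) == 4`.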

WHAT IS A BROADCAST DOMAIN?

Broadcast domains describe how far a broadcast frame can reach. If an end device broadcasts a frame, it will be forwarded to the whole subnet. It doesn't pass through routers or between subnets, so a broadcast domain is contained to its own subnet.

(You can forward broadcasts between subnets, for example by using a DHCP relay to forward a DHCP request, which is a broadcast, into another subnet.)

The topology below shows two layer 2 networks separated by a router. The router is the barrier for the broadcasts so each side of the router is a broadcast domain.

VLANs can be used to minimise broadcast domains. If half of your switch's ports are in VLAN 10 and the other half in VLAN 20, then broadcasts for VLAN 10 will only be forwarded to the VLAN 10 ports, meaning each VLAN is its own broadcast domain.