Thursday, December 30, 2021

Understanding OSPF LSA Types

This covers the different LSA types in OSPF and how they behave within an OSPF autonomous system (AS).

PURPOSE OF LINK-STATE ADVERTISEMENTS (LSAs) IN OSPF

OSPF's link-state database (LSDB) is built from LSAs advertised by routers within the OSPF AS. These LSAs are compiled into the LSDB, each identified by a link-state ID (LSID). OSPF's SPF algorithm uses these LSAs to calculate paths to destinations. There are different LSA types for different aspects of OSPF, from intra-area information, to inter-area (IA) routes shared by ABRs between areas, to external routes redistributed into OSPF by ASBRs.

LSA TYPE 1 - ROUTER LSA

Type 1 LSAs are generated by every router and contain information on all of its OSPF-enabled links. This information consists of:
  • For interfaces on networks without an elected DR, the router's interface subnet number/mask and the interface's OSPF cost.
  • For interfaces on networks with an elected DR, the IP address of the DR, with a notation that the interface connects to a transit network.
Every router in an area creates a single Type 1 LSA for itself and floods it throughout its own area. ABRs, on the other hand, create multiple Type 1 LSAs, one for each area they connect to, and flood each LSA into the area it describes. As the ABR has a single RID, that RID is the same on each of its LSAs.

The LSID for a Type 1 LSA is the RID of the advertising router, and these routes are listed as "O" in the routing table.

LSA TYPE 2 - NETWORK LSA

Type 2 LSAs are used on multi-access (MA) networks, where there are multiple adjacencies over a shared broadcast domain. Once the DR has been elected, it creates a Type 2 LSA for the subnet. OSPF cannot represent every router on a shared subnet with individual point-to-point links between them all, so it models the subnet itself as a node, which the Type 2 network LSA describes as a transit network.

Each router's Type 1 LSA describes a connection to this transit network; the DR then generates the Type 2 LSA for it, using its own interface IP address as the LSID. This is flooded within the local area, just as Type 1 LSAs are.

These are listed as "O" routes in the routing table.

LSA TYPE 3 - SUMMARY LSA

ABRs don't simply forward all Type 1 and 2 LSAs into other areas; that would increase the complexity of the SPF calculation as well as take up more memory. Instead, the ABR advertises the subnets of an area into its other areas as Type 3 LSAs, which simply say, for example, that subnet XYZ exists in Area 2.

These LSAs aren't as detailed as Type 1 and 2 LSAs. The ABR uses the subnet's address as the LSID and includes its own RID so other routers know which ABR advertised the route. Although it is called a Summary LSA, it doesn't summarise the subnets; the term simply means it isn't as detailed.

These are listed as "O IA" routes in the routing table.

LSA TYPE 4 - SUMMARY ASBR LSA

These are only created when an ABR receives a Type 5 LSA from an ASBR. The LSA basically lists the RID of an ASBR and the cost to reach it from that ABR, and it is flooded out to all connected areas. The LSID used for these LSAs is the ASBR's RID.

These matter when there are multiple possible paths to subnets outside the OSPF AS, where they act as tie breakers for External Type 2 routes.

LSA TYPE 5 - EXTERNAL LSA

Routes from an external AS are flooded through the OSPF AS by the ASBR. These LSAs are flooded to all non-stubby areas; stubby areas have to rely on default routes, as they don't allow Type 5 LSAs in the area. The ASBR generates a Type 5 LSA for each external subnet, including the following information.
  • LSID which is the subnet IP address
  • Mask which is the subnet mask
  • Advertising Router which is the RID of the ASBR
  • Metric which is set by the ASBR
  • External Metric Type which can either be Type 1 or Type 2
There are two metric types, External Type 1 and External Type 2, displayed in the routing table as either "E1" or "E2". (A configuration sketch for choosing the metric type follows this list.)
  • External Type 2 routes just use the metric set by the ASBR. If there are multiple paths with the same metric, the Type 4 LSA is then used to calculate which has the better cost; this is basically the cost from the ABR to the ASBR added to the existing metric.
  • External Type 1 routes use the metric set by the ASBR but then add on the internal costs.
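
As a hedged sketch of where that choice is made (Cisco IOS shown; the process ID, metric and redistributed source are example values only), the ASBR picks the metric type when it redistributes:

router ospf 1
 redistribute static subnets metric 20 metric-type 1

Here metric-type 1 marks the routes as E1, so the internal cost gets added on as the route propagates; metric-type 2 (the IOS default) keeps the metric fixed as set on the ASBR.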

LSA TYPE 6 - MULTICAST LSA

These are used for multicast routing with the MOSPF routing protocol; this LSA type isn't supported on Cisco products.

LSA TYPE 7 - EXTERNAL LSA

This is basically the same as a Type 5 LSA. As Type 5 LSAs cannot enter stub areas, the Type 7 LSA was developed to get around this when an ASBR sits inside a stub-like area, known as a not-so-stubby area (NSSA). The Type 7 LSA is flooded within that area, and when it reaches the ABR it is converted to a Type 5 LSA and flooded through the normal areas.

The external route types here are the same, but in the routing table they use "N1" or "N2" rather than the "E1" and "E2" used by Type 5 LSAs.
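
As a hedged sketch (Cisco IOS, with the process ID and area number as example values), an area is made a not-so-stubby area so Type 7 LSAs can be used by configuring the following on every router in that area:

router ospf 1
 area 1 nssa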

LSA TYPE 8 - EXTERNAL ATTRIBUTES LSA FOR BGP

This LSA is for use with the Border Gateway Protocol (BGP).


Sunday, October 3, 2021

Setting Up Autopilot in Endpoint Manager

This guide covers creating Autopilot deployment profiles, Azure AD groups and importing Windows devices into Endpoint Manager for Autopilot configuration.

HOW AUTOPILOT WORKS

Autopilot automates the process of enrolling a device into Endpoint Manager, deploying your security policies, installing your desired apps and applying your device configuration, all within the initial setup period. Users log in with their Azure AD credentials to access their enterprise data and resources.

Autopilot uses deployment profiles. These let you define how the device is enrolled, whether it's self-deploying (enrolls without needing a user to log in) or user-driven (the user logs in before it enrolls).

These deployment profiles need to be linked to the devices you want them to apply to; this is where the Azure AD group comes into play. Once you have imported your Windows devices, you need to add them to the group that is associated with your deployment profile. When you boot up the device, it will realise it has an Autopilot profile assigned and then follow that profile during the OOBE.

CREATING A GROUP FOR YOUR DEVICES

In Azure AD, create a security group which will be used by Endpoint Manager for your deployment profile. Later on when we import the device, we can add it to the Azure AD group.

This group can be used for targeting your configuration policies, app deployments, conditional access, etc.

CREATING AN AUTOPILOT DEPLOYMENT PROFILE

Under the Windows enrollment section, there is an option named 'Deployment Profiles' where you can create your profile. Simply create a new profile and follow through the wizard. 



You will get the option of User Driven or Self-Deploying.

  • User Driven - The device is associated with a user; during the OOBE the user needs to log in with their Azure AD credentials. Once they log in, the device enrolls into Endpoint Manager, applies the security configuration, installs the user's applications and is set up ready for them to use.
  • Self-Deploying - The device enrolls into Endpoint Manager without requiring a user to log in. It then displays the Windows login screen, and during the first login it applies the security and device configuration.

I use the self-deploying profile in production, but it's a matter of what better fits your environment.

Follow through the rest of the setup, selecting your desired options and setting the default language and device name template. The device name can use the value %SERIAL% to include the device's serial number, or %RAND:4% for random digits, with the number indicating how many random digits to add (4 in this case) - for example, LAPTOP-%SERIAL%.

Finally under the 'Included' groups, select your Azure AD group created for your devices.



IMPORTING YOUR WINDOWS DEVICE INTO ENDPOINT MANAGER

You can automate this by having your vendor provide you with the hardware IDs of your new laptops, which you can import into Endpoint Manager so the devices can be unboxed on delivery and go straight into the Autopilot process and enroll.

In this example, I will show you how to manually get the Hardware IDs and import the device into Intune.

You need to boot up the laptop and open Command Prompt; if you are in the OOBE, press Shift+F10 to open it. Once open, enter 'PowerShell' to start PowerShell, then run the commands below.

# Create a working folder for the hardware hash and move into it
New-Item -Type Directory -Path "C:\HWID"
Set-Location -Path "C:\HWID"

# Allow the downloaded script to run in this session only
Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned

# Download the official Get-WindowsAutoPilotInfo script and export the hardware hash to a CSV
Install-Script -Name Get-WindowsAutoPilotInfo
Get-WindowsAutoPilotInfo -OutputFile AutoPilotHWID.csv

This will generate a CSV in the C:\HWID directory. Run the command 'explorer.exe' to open Windows Explorer and copy this CSV to a pendrive. Back in Endpoint under 'Windows Enrollment > Autopilot > Devices' you want to select the 'Import' option then upload your CSV. 

Once it has imported, you will need to add it to your Azure AD group. Under the Autopilot Devices menu, your device will get an Assigned status under the 'Profile Status' column which means the Autopilot Deployment Profile has successfully been assigned to that device.


Now you just need to reboot the device, go back into the OOBE and it should start the Autopilot process.

Monday, March 1, 2021

Creating an Aruba IAP Cluster and SSID

This uses the Aruba 315 model IAPs.

BOOTING THE ACCESS POINTS

You need to power up the access points and ensure they can get an IP address via DHCP; you can manually specify one if you have the console cable.

If you hold down the reset button as it powers up, it should factory reset the access point. You can tell this has happened when the lights flash, then flash a second time around 10 seconds later. Let the access point finish booting.

CONNECTING TO THE SETMEUP SSID

Once the AP has booted up, obtained an IP address and is broadcasting the SetMeUp SSID, you can begin to configure the access point. You should see a network named SetMeUp-XX:XX:XX, which is the Aruba AP.

CONNECTING TO THE ADMIN PORTAL

Once you are connected, you can access the admin portal via your web browser by entering the IP address of the access point, or by using the URL 'setmeup.arubanetworks.com'. The username will be admin and the password is the serial number on the back of the access point.

USING THE PORTAL

Under Configuration you can change the controller's name, change network and access point settings, etc. Below you can see me setting the name of the IAP controller to JakeOnSysadmin.

CREATING AN SSID

To create a new SSID, you need to go to Configuration then Networks which will show the existing SSIDs. Click on the Plus sign to create a new SSID.

Go through the wizard setting the name, VLAN settings, security settings, etc.


Once finished you should now see the SSID broadcasting.

CLUSTERING THE ACCESS POINTS

If you boot up another access point within the same VLAN, it should cluster with the existing access point and broadcast the same SSIDs to extend your wireless coverage.


Setting Up DHCP on an HPE 5130 Switch

This covers setting up an HPE 5130 switch as your DHCP server.

ENABLING DHCP GLOBALLY

First you need to enable DHCP globally on the switch. You do so with the command below.
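
As a rough sketch of what that looks like on the 5130's Comware CLI (entering system view first):

system-view
dhcp enable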


CREATE A VLAN AND VLAN INTERFACE

Now we need to create a VLAN and a VLAN interface, which will be the gateway for the VLAN. Below are the commands used to create the VLAN and its interface, along with an IP address.
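
A sketch of those commands; VLAN 10 and the 192.168.10.0/24 addressing are example values only:

vlan 10
 quit
interface vlan-interface 10
 ip address 192.168.10.1 255.255.255.0
 quit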


CREATING THE POOL

With the VLAN created, the pool needs to be created. You need to define the network address and mask as well as the range of addresses you can get via DHCP. The commands are shown below.
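
A sketch of the pool, using the same example network; the pool name, address range and gateway are example values:

dhcp server ip-pool VLAN10
 network 192.168.10.0 mask 255.255.255.0
 address range 192.168.10.100 192.168.10.200
 gateway-list 192.168.10.1
 quit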


ASSIGN A DHCP POOL TO A VLAN INTERFACE

The newly created DHCP pool now needs to be assigned to the VLAN interface. This is done with the command shown below.
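
A sketch of binding the example pool to the example VLAN interface:

interface vlan-interface 10
 dhcp server apply ip-pool VLAN10
 quit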


TESTING THE DHCP SERVER

That should be all. To confirm, I have plugged in an access point to see if it gets an IP address.


The command 'display dhcp server ip-in-use' will show the leases; as we can see there's a lease, which shows DHCP is working.



Saturday, February 27, 2021

Stacking HPE 5130 Switches

The switches used in this example are HP 5130s.

WHAT ARE STACKED SWITCHES?

Stacked switches are multiple switches connected together in a cluster so they appear as a single logical device. The benefits of this are ease of management, with fewer IP addresses to document, as well as redundancy: you can have multiple uplinks, so if a link does go down, clients can use the remaining uplinks.

You can even use link aggregation on the uplinks to simplify spanning-tree topologies as well as load balance traffic up to the distribution layer switches.

MY LAB

Below is an example of the lab I am using for this. It's two HPE 5130s with stacking cables connected to the Ten-GigabitEthernet ports. The image below shows the completed stack.

The cables are connected last; once all the configuration is in place, you then slot in the stacking cables.

CONFIGURING THE FIRST SWITCH

The first switch is the top one; it will need to be numbered to represent its position in the stack (as it's the top switch, it will be member 1). You can use the command 'display irf' to show the IRF topology. As the screenshot below shows, switch 1 is already set as Member 1.

Now that it is numbered correctly, the stacking ports need configuring. First you need to shut down the ports you will be using, in this case Ten-GigabitEthernet 1/0/49 and 1/0/50, then assign them to an IRF port.

You create the IRF ports using the command 'irf-port <member-id>/<port-num>', so I will be using irf-port 1/1 and 1/2. You define which physical port(s) you want to be part of each IRF port, then once all your physical ports are associated with an IRF port, bring those ports back up.

Once the ports are ready, you can activate the IRF configuration on the switch using the command below. I will then save the configuration and reboot it.
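
Pulling those steps together, a rough Comware sketch for switch 1 (the Ten-GigabitEthernet port numbers are the ones used in this lab; adjust them for your hardware):

interface ten-gigabitethernet 1/0/49
 shutdown
 quit
interface ten-gigabitethernet 1/0/50
 shutdown
 quit
irf-port 1/1
 port group interface ten-gigabitethernet 1/0/49
 quit
irf-port 1/2
 port group interface ten-gigabitethernet 1/0/50
 quit
interface ten-gigabitethernet 1/0/49
 undo shutdown
 quit
interface ten-gigabitethernet 1/0/50
 undo shutdown
 quit
save
irf-port-configuration active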

CONFIGURING THE SECOND SWITCH

Now, this is the second switch, therefore it needs to be renumbered, as switches by default are set as member 1. I can confirm this by using the 'display irf' command and then renumber the switch using the command shown below. You will need to save your changes then reboot the switch for the change to take effect.
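
A sketch of that renumbering (run from system view, then save and drop back to user view to reboot):

irf member 1 renumber 2
save
quit
reboot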

After the reboot, I can check the IRF topology again to see it has now updated to be member 2.

Now, like before, the switch will need its ports setting up, so shut down the physical ports being used for stacking, set up the IRF ports and associate the physical ports with them, then bring them back up. Finally, the IRF port configuration is activated to complete the setup of switch 2.
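
The sketch is the same shape as on switch 1, except the IRF ports (and, after the renumber and reboot, the interface names) use member ID 2; the shutdown and undo shutdown steps are omitted here for brevity:

irf-port 2/1
 port group interface ten-gigabitethernet 2/0/49
 quit
irf-port 2/2
 port group interface ten-gigabitethernet 2/0/50
 quit
irf-port-configuration active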

CONNECTING THE SWITCHES

With the configuration set on each switch, all that is left is to connect the stacking cables. You need to push them in until they click into place, and the stack will then form automatically. Some switches may reboot; typically the non-master switches will reboot. Once it is all back up, you can check the IRF topology and you should see both switches present.

Wednesday, February 17, 2021

Setting up Self-Service Password Reset (SSPR) in Microsoft Azure

This covers setting up the password reset service in Microsoft Azure.

ENABLING SELF-SERVICE PASSWORD RESET

Within Azure Active Directory, go to the Password Reset section.


Under the Properties section, you have a switch with the options None, Selected and All. This is for enabling self-service password reset: the All option enables SSPR for all users in the tenant, None disables the feature, and Selected enables it for a specific security group. (This can be useful for only enabling it for specific users.)


SETTING YOUR OPTIONS FOR SSPR

You can now define your options for the self-service password reset process. This is defining how many authentication steps are required, and what authentication methods users can choose to use as shown below.


You can set Registration settings so users have to register the first time they log in to define their MFA settings (set security question answers, enter a mobile phone number, etc.), as well as how often they have to re-confirm their authentication details.


You can control the notification settings such as notifying a user when their password was reset as well as notifying all admins when another admin resets their password.

Tuesday, February 16, 2021

Creating Storage Pools in Windows Server

This covers setting up new disks to be part of a Storage Pool on Windows Server.

BRINGING THE DISKS ONLINE

Within Server Manager, the Disks section within File and Storage Services will show your new disks. You will need to bring them online by right-clicking each one and selecting 'Bring Online'.

CREATING THE STORAGE POOL

Once your disks are online, we can pool them together. Under Storage Pools in File and Storage Services, right-click in the storage pools area and select 'New Storage Pool'. This will open the Storage Pool wizard.

Within the wizard, set a name for the Storage Pool.

Now select the disks you wish to be part of the pool.

Once the wizard completes, you will want to tick the 'Create a virtual disk when the wizard closes' option to automatically open the virtual disk creation wizard.

CREATING A VIRTUAL DISK

Now we have the pool created, we need to create some virtual disks. If you have ticked the option in the last step you should see the virtual disk wizard. Follow through the wizard giving your disk a name, setting the desired options, etc. 

After creating your disk, the option to automatically open the volume creation wizard should already be selected.

CREATING VOLUMES ON THOSE DISKS

We have a storage pool and virtual disks, but we now need volumes on them to make all of this usable. The volume creation wizard should have opened after the last step. Follow through the wizard, selecting the desired disk, naming the volume, and choosing a drive letter and file system settings.
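
If you prefer scripting, roughly the same steps can be done in PowerShell. This is only a sketch: the pool, disk and volume names, the mirror resiliency setting and the drive letter are example values, and it assumes a single storage subsystem on the server.

# List the disks that are eligible to be pooled, then create the pool from them
Get-PhysicalDisk -CanPool $true
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# Create a mirrored virtual disk on the pool, then initialise it and create a formatted volume
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk | Initialize-Disk -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"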

At the end you will be able to see the volumes you have created under Volumes in File and Storage Services, as shown below.

Under Disks on Storage Pools you will also be able to see the created disks.

Friday, February 12, 2021

Understanding Multi-Tiered Network Design

This covers designing networks with multiple tiers and redundancy.

DESIGNING A NETWORK

When designing a network, uptime, resiliency, redundancy, performance and manageability are the main things to design into the network. 

If you build a network with lots of redundancy, that will prevent a single point of failure from causing a network outage. Redundancy allows the network to continue to run even if a link or network device goes down. The network can withstand outages while end users continue to work, giving the network team time to resolve the issue. This resiliency keeps uptime high and lowers the impact of issues.

Redundancy can also improve performance. Multiple links between network devices can be logically combined to act as a single link, which allows traffic to load balance across the links for higher throughput. If one of those links goes down, spanning tree won't need to adjust the tree and root ports, allowing traffic to continue to flow.

SIMPLE NETWORK DESIGN

Below is an example of a simple network. There's a router, a layer 3 switch and layer 2 switches for end devices. This network can work, but if a link goes down, or the layer 3 switch goes down, the network won't be able to carry on running.

TIERED NETWORK DESIGN

Now with redundancy, the network could look like the example below, where the network is tiered. End devices connect to layer 2 switches at the access layer; if traffic needs to flow further through the network, it can go up to the distribution layer then down to another layer 2 switch, or up to the routers for internet access.

If a link or a layer 3 switch goes down, the redundant link can be used to avoid disrupting the flow of traffic and uptime within the network.

Segmenting the network like this also makes it easier to manage. It will be easier to understand the flow of traffic, which can be controlled more easily with ACLs, as well as adding in controlled segments like DMZs.

FURTHER REDUNDANCY

You can go even further with things like clustering: the access layer switches can be clustered to logically behave as a single switch, and the same goes for the layer 3 switches clustered into a single logical switch. The routers can then use redundancy protocols such as HSRP, VRRP or GLBP so that one router can fail over to another if it goes down.

Understanding the DHCP Request Process

This covers the underlying process of clients requesting and receiving DHCP IP Addresses from DHCP servers.

DHCP MESSAGES

The DHCP process is made up of 4 messages between the client and the DHCP server.

  1. Discover - The client broadcasts out to see if there's a DHCP server.
  2. Offer - The DHCP server responds with an IP address the client can have.
  3. Request - The client requests to use that IP address.
  4. Acknowledgement - The DHCP server acknowledges that the client now has that IP address.

The messages from the client are broadcasts whereas the DHCP server responds using unicast.

What is Administrative Distance?

This covers administrative distance in terms of routing within computer networking.

THE NEED FOR ADMINISTRATIVE DISTANCE

When routing, there may be several possible best routes. Say you were routing to Network B: OSPF will determine a best route there, EIGRP will determine a best route as well, IS-IS would too, and maybe you also have a static route to Network B. That's four different best paths to Network B, but which one will be put into the routing table as the main route? This is what administrative distance handles.

Administrative distance gives each routing method a score; this score rates how reliable and trustworthy the route source is. The lower the score, the better. So if there are two routes to a network, one from OSPF and one from a static route, the administrative distance score will be used to decide which route to use.

The list below shows the administrative distance scores for different routing protocols and methods of routing traffic. So in the case of OSPF vs a static route, the static route is trusted more and will be used over the OSPF route.

Route Protocol / Source    Administrative Distance
Connected Interface        0
Static Route               1
EIGRP Summary Route        5
External BGP               20
EIGRP                      90
OSPF                       110
IS-IS                      115
RIP                        120
External EIGRP             170
Internal BGP               200
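
You can see the administrative distance in the routing table itself. In Cisco IOS 'show ip route' output, the first number inside the square brackets is the administrative distance and the second is the metric; the entries below are made up purely for illustration:

O     10.1.1.0/24 [110/20] via 192.0.2.1, 00:01:23, GigabitEthernet0/0
S     10.2.2.0/24 [1/0] via 192.0.2.2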

What's the Difference Between Collision Domains & Broadcast Domains?

This covers the difference between a collision domain and broadcast domain in terms of computer networking.

WHAT IS A COLLISION DOMAIN?

Collision domains describe the area of a network where frames can collide. The number of collision domains depends on the equipment.

The image below shows a 4-port hub and a 4-port switch. The hub and its connected devices are all a single collision domain. Why? Because a hub forwards frames out of all interfaces, so there's the possibility of a collision on every interface whenever traffic is transmitted.

The switch in the image has 4 collision domains. Why? Switches are more intelligent: they make forwarding decisions based on the MAC address table, so each link between an end device and the switch is an isolated collision domain. There are 4 links, therefore 4 collision domains.

WHAT IS A BROADCAST DOMAIN?

Broadcast domains describe how far a broadcast can reach. If an end device broadcasts a frame out, it will be forwarded to the whole subnet. It doesn't pass through routers or between subnets, so a broadcast domain is contained within its own subnet.

(You can forward broadcasts between subnets such as using DHCP Relays to forward a DHCP Request, which is a broadcast, between subnets)

The topology below shows two layer 2 networks separated by a router. The router is the barrier for the broadcasts so each side of the router is a broadcast domain.

VLANs can be used to minimise broadcast domains. If half of your switch ports are in VLAN 10 and the other half are set as VLAN 20, then broadcasts for VLAN 10 will only be forwarded to that half of the switch, meaning each VLAN will be its own broadcast domain.