Configure NSX-T 3.0 from scratch with edge cluster and tier gateways

NSX configuration is very specific to individual needs, so a single fixed set of steps is not practical. For lab / learning purposes, however, one can go through the steps below to get a working NSX-T 3.0 setup with an edge cluster and tier gateways (both T0 and T1) that talk to the ToR switches using BGP.

Pre-requisites

  1. The following values / pre-requisites are required before we can go ahead with the NSX-T 3.0 setup with BGP peering to the ToR switches:
    • NTP for the setup
    • DNS to resolve various FQDN for the setup
    • NSX-T, vCenter, ESXi licenses
    • ESXi hosts IP, FQDN, Management VLAN, vMotion VLAN, Shared storage/vSAN
    • vCenter IP, FQDN, VLAN
    • NSX-T manager management IP
    • VLAN for Host VTEP communication - Needs DHCP
    • VLAN for Edge VTEP communication
    • VLAN for left ToR Edge Uplink (North-South)
    • VLAN for right ToR Edge Uplink (North-South)
    • Two IPs for edge in management VLAN with DNS entries
    • Two IPs per edge (4 for 2 edges) in edge VTEP communication VLAN
    • Two IPs per edge (one in left ToR Edge uplink VLAN and other in right ToR Edge uplink VLAN)
    • BGP configuration on both left and right ToR in respective uplink VLANs


Example values used in the steps below

  1. For example, in the steps below we are going to use the following values:
    • NTP: time.google.com
    • DNS: 10.1.1.2 - a bind DNS server set up specifically for the rnd.com lab domain. Refer to Configuring basic DNS service with bind. The named.conf and rnd.com.forward files used by this DNS server are listed later in this article.
    • ESXi host IPs: 10.1.1.11 to 10.1.1.14
    • ESXi host FQDNs: esxi01.rnd.com to esxi04.rnd.com
    • Management VLAN: 201 - 10.1.1.0/24
    • vMotion VLAN: 202 - 10.1.2.0/24. ESXi hosts will use IPs 10.1.2.11 to 10.1.2.14 in this VLAN for vMotion.
    • Shared storage: NFS at 10.1.1.41, created as described in CentOS 7.x Cloudstack 4.11 Setup NFS for secondary and primary (if no central storage) storage
    • vCenter details: IP 10.1.1.3, FQDN vcenter.rnd.com, VLAN 201 (management VLAN)
    • NSX-T manager management IP: 10.1.1.4, FQDN nsxt.rnd.com
    • VLAN for Host VTEP communication: VLAN 203 - 10.1.3.0/24. This VLAN needs DHCP, which can be provided either by adding a Linux machine running a DHCP service to the network (see the sketch after this list) or by configuring DHCP on the ToR switches. Refer: Configure DHCP server on Cisco router
    • VLAN for Edge VTEP communication: VLAN 204 - 10.1.4.0/24
    • VLAN for left ToR Edge Uplink (North-South): VLAN 205 - 10.1.5.0/24
    • VLAN for right ToR Edge Uplink (North-South): VLAN 206 - 10.1.6.0/24
    • Two IPs for edges in the management VLAN with DNS entries: edge01.rnd.com (10.1.1.21) and edge02.rnd.com (10.1.1.22)
    • Two IPs per edge (4 for 2 edges) in the edge VTEP communication VLAN: 10.1.4.11/24 to 10.1.4.14/24
    • Two IPs per edge (one in the left ToR edge uplink VLAN and one in the right ToR edge uplink VLAN): 10.1.5.2/24, 10.1.5.3/24, 10.1.6.2/24 and 10.1.6.3/24
    • BGP configuration on both left and right ToR in the respective uplink VLANs: refer to the sample single-router CSR1000v configuration given below.
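
If a Linux machine is used to provide DHCP on the host VTEP VLAN 203, a small dnsmasq instance is one possible option. A minimal sketch, assuming the machine has an interface named ens192 attached to VLAN 203 and that the range 10.1.3.100 to 10.1.3.200 is free for leases (both are assumptions, adjust to the actual lab machine):

# Run as root on the Linux DHCP machine attached to VLAN 203
dnsmasq --interface=ens192 --bind-interfaces \
        --dhcp-range=10.1.3.100,10.1.3.200,255.255.255.0,168h \
        --dhcp-option=option:router,10.1.3.1 \
        --dhcp-option=option:dns-server,10.1.1.2 \
        --domain=rnd.com

The gateway handed out (option:router) points at 10.1.3.1, the VLAN 203 sub-interface on the ToR / l3-switch, and the 168h lease roughly matches the 7 day lease used in the sample router DHCP pool below.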


Setup ESXi, NFS, DNS and vCenter

  1. After this, set up four ESXi hosts (esxi01.rnd.com to esxi04.rnd.com) for the nested setup using all-trunk VLAN ports, with management IPs 10.1.1.11 to 10.1.1.14 in VLAN 201. Note the point about the VMkernel MAC address at Install nested ESXi on top of ESXi in a VM
  2. Create DNS machine with IP 10.1.1.2 connected to VLAN 201 for DNS resolution. Refer Configuring basic DNS service with bind.
  3. Create NFS machine with IP 10.1.1.41 for shared storage among all four ESXi hosts. Refer CentOS 7.x Cloudstack 4.11 Setup NFS for secondary and primary (if no central storage) storage
  4. Change "VM Network" VLAN to 201 in all four ESXi hosts. This will help with reachability to vCenter when it is deployed on 'VM Network' later.
  5. Add NFS datastore from 10.1.1.41 to esxi01.rnd.com so that we can deploy vCenter on this shared NFS datastore
  6. Create a few management stations, 10.1.1.31 (Linux) and 10.1.1.32 (Windows), in VLAN 201 for nested lab operations. Both machines need an additional interface in the normal LAN (e.g. 172.31.1.0/24 as suggested here), configured without a gateway, so that they can be reached from LAN machines in the 172.31.1.0/24 subnet. The default gateway of these management stations should be 10.1.1.1 so that they can access all lab subnets via the L3 switch / router.
  7. Deploy vCenter, with IP 10.1.1.3 / FQDN vcenter.rnd.com, from the Windows management station on top of nested ESXi (esxi01.rnd.com), using the NFS datastore and the 'VM Network' port group.
  8. Once vCenter for nested lab experiments is available do the following:
    1. Add all four ESXi hosts among two clusters - Compute cluster (esxi01, esxi02) and Edge cluster (esxi03, esxi04)
    2. Add ESXi and vCenter licenses to vCenter. Assign these licenses.
    3. Create a distributed switch using two of the four uplink ports of each of the four nested ESXi VMs.
      1. Create a distributed port group for management using VLAN 201
      2. Migrate vCenter and the vmk0 kernel port to this distributed switch.
      3. Later migrate the remaining two uplinks also to the distributed switch.
      4. Delete the standard switch from all four hosts
    4. Configure an MTU of at least 1700 on this distributed switch in the nested lab. The MTU on the external ESXi host switch hosting these nested ESXi VMs should be higher (ideally 9000+). If multiple physical ESXi hosts are used for the lab setup then these hosts should have VLAN connectivity for all required VLANs (including 201-206 listed above) and MTU 9000+
    5. Create a distributed port-group named ALL-VLANS-Trunk-DPG on the distributed switch using the "VLAN trunking" option with range 0-4094
    6. Create a distributed port-group for vMotion (VLAN 202) and add VMkernel ports (10.1.2.11 to 10.1.2.14) on all four ESXi hosts for vMotion
    7. Mount the NFS datastore from 10.1.1.41 (already mounted on esxi01) on the other 3 hosts as well; see the esxcli sketch after this list.
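
The NFS datastore can be added either from the vSphere client or from the ESXi shell. A minimal sketch of the shell approach, assuming the export path on 10.1.1.41 is /nfs/primary and the datastore name is nfs01 (both are assumptions, use whatever was configured on the NFS server):

# Run on each of esxi02, esxi03 and esxi04 after enabling SSH
esxcli storage nfs add --host=10.1.1.41 --share=/nfs/primary --volume-name=nfs01
# Confirm the datastore is mounted and accessible
esxcli storage nfs list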


Setup NSX-T Manager and Edge cluster

  1. Deploy the NSX-T manager with IP 10.1.1.4 - FQDN nsxt.rnd.com on the management port group from one of the admin stations (10.1.1.31 or 10.1.1.32). This NSX manager can be deployed on top of esxi02.rnd.com
    Do not choose the extra small size; that is meant for a different purpose altogether.
    Enable SSH to the NSX-T manager.
  2. Add the NSX-T license to the NSX-T manager. Then go to System -> Fabric -> Compute Managers and integrate the NSX manager with vCenter.
  3. Go to System -> Fabric -> Transport Zones and create two transport zones: one for edge uplink communication using traffic type VLAN (edge-vlan-tz) and one for overlay communication using traffic type Overlay (mg-overlay-tz)
  4. Go to System -> Fabric -> Profiles. Create an uplink profile with load-balance source teaming, 4 active uplinks (u1, u2, u3, u4) and transport VLAN 203 for host VTEP communication
  5. Go to System -> Fabric -> Nodes -> Host Transport Nodes. Select managed by vcenter.rnd.com. Configure NSX on the compute cluster hosts (esxi01 and esxi02) using the overlay (mg-overlay-tz) transport zone and the uplink profile with 4 uplinks in VLAN 203 (host VTEP) with DHCP based IPs. Assign all four uplinks of the existing vDS to these hosts. There is no need to go for N-VDS or enhanced data path
  6. Go to System -> Fabric -> Nodes -> Edge Clusters. Create an Edge cluster with name edgecluster01
  7. Go to System -> Fabric -> Profiles. Create an uplink profile for the edges with transport VLAN 204 (edge VTEP) and two active uplinks with load-balance source teaming
  8. Go to Networking -> Segments and create segments for the edge uplinks. Create a left-uplink segment, choose edge-vlan-tz as the transport zone and specify VLAN 205.
    Similarly create a right-uplink segment with edge-vlan-tz and VLAN ID 206.
  9. Go to System -> Fabric -> Nodes -> Edge Transport Nodes. Create edge (edge01) with management IP 10.1.1.21 - FQDN edge01.rnd.com with both uplinks as ALL-VLANS-Trunk-DPG. Use the edge uplink profile created in previous steps. Use 10.1.4.11 and 10.1.4.12 as VTEP IPs for the edge in VLAN 204. Both mg-overlay-tz and edge-vlan-tz transport zones should be selected for the edge.
    Create this without resource (memory) reservation. Use the edge cluster created above for deploying this.
  10. Similarly create edge (edge02) with management IP 10.1.1.22 - FQDN edge02.rnd.com with both uplinks as ALL-VLANS-Trunk-DPG. Use the edge uplink profile created in previous steps. Use 10.1.4.13 and 10.1.4.14 as VTEP IPs for the edge in VLAN 204.
  11. Once both edges are ready, add them to the edge cluster (edgecluster01)
  12. SSH to each edge as admin and run "get logical-routers". Enter the first VRF with "vrf 0" and check the interfaces with "get interfaces". Try to ping the gateway 10.1.4.1 to validate edge VTEP connectivity to the VTEP VLAN gateway; see the sketch below.
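
A minimal sketch of this validation session on the NSX-T edge CLI (run it on both edge01 and edge02; 10.1.4.1 is the VLAN 204 gateway on the ToR / l3-switch):

get logical-routers
vrf 0
get interfaces
ping 10.1.4.1
exit

"get interfaces" inside VRF 0 should list the edge TEP IPs (10.1.4.11 and 10.1.4.12 on edge01), and the ping only succeeds if VLAN 204 and the MTU along the path are configured correctly.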


Setup T0 and T1 gateway routers with overlay segments

  1. Go to Networking -> T0-Gateways
  2. Create t0-gw as active-standby on edgecluster01. Click save
  3. Add interface edge01-left-uplink with IP 10.1.5.2/24 connected to the left-uplink segment. Select edge01 as edge node.
    Similarly create interface edge01-right-uplink with IP 10.1.6.2/24 connected to the right-uplink segment. Select edge01 as edge node.
  4. Add interface edge02-left-uplink with IP 10.1.5.3/24 connected to the left-uplink segment. Select edge02 as edge node.
    Similarly create interface edge02-right-uplink with IP 10.1.6.3/24 connected to the right-uplink segment. Select edge02 as edge node.
  5. Go to BGP and configure AS 65000. Enable BGP.
  6. Add BGP neighbor 10.1.5.1 with remote AS 65000 and source IPs 10.1.5.2 and 10.1.5.3
    Similarly add BGP neighbor 10.1.6.1 with remote AS 65000 and source IPs 10.1.6.2 and 10.1.6.3
  7. Add route-redistribution for all-routes. Select at least NAT-IP and "Connected Interfaces and Segments (including subtree)" under T0 gateway
    Select "Connected Interfaces and segments" and NAT-IP for Tier-1 gateways
  8. SSH to the edge and run "get logical-routers". There should be a service router for the T0 gateway at VRF 1. Enter it with "vrf 1".
    Validate that the BGP connections to the ToR have come up using "get bgp neighbor summary".
    Look at the BGP routes using "get bgp".
  9. Go to Networking -> T1-Gateways
  10. Add a Tier-1 gateway such as t1-prod with connectivity to the t0-gw created before. Select edge cluster edgecluster01. By selecting an edge cluster we get a service router for the T1 gateway, which is useful for NAT / L2 bridging etc. Click save.
    Go to route advertisement and advertise routes for "All connected segments and service ports"
  11. Go to Networking -> Segments. Add a segment such as app-seg with connectivity to t1-prod and transport zone mg-overlay-tz. Give the subnet as 10.1.7.1/24. This will become the default gateway for this overlay segment on the distributed router.
    Add another segment web-seg with connectivity to t1-prod and transport zone mg-overlay-tz. Give the subnet as 10.1.8.1/24.
  12. Now these two new overlay segments 10.1.7.0/24 and 10.1.8.0/24 should be reachable from the admin stations 10.1.1.31 and 10.1.1.32. We should also see the related routes on the service router of the T0 gateway and on the ToR; see the verification sketch below.
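
A minimal verification sketch. On the edge hosting the active T0 service router (the VRF number comes from the "get logical-routers" output, 1 in the example above):

get logical-routers
vrf 1
get bgp neighbor summary
get route
exit

And on the CSR1000v ToR, the four edge uplink neighbors should no longer show Idle or Active in the State/PfxRcd column of "show ip bgp summary", and the overlay segments 10.1.7.0/24 and 10.1.8.0/24 should appear in the BGP routes:

show ip bgp summary
show ip route bgp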



Input files to help with test setup

rnd.com zone in named.conf for dns.rnd.com

Only rnd.com zone related lines of named.conf are captured below:

zone "rnd.com" IN {
        type master;
        file "rnd.com.forward";
};

For full steps refer Configuring basic DNS service with bind


rnd.com.forward for dns.rnd.com

rnd.com.forward file for rnd.com zone is:

$TTL 3600
@ SOA ns.rnd.com. root.rnd.com. (1 15m 5m 30d 1h)
	NS dns.rnd.com.
	A 10.1.1.2
l3switch	IN	A	10.1.1.1
dns             IN      A       10.1.1.2
vcenter         IN      A       10.1.1.3
nsxt            IN      A       10.1.1.4
nsxt1           IN      A       10.1.1.5
nsxt2           IN      A       10.1.1.6
nsxt3           IN      A       10.1.1.7
esxi01		IN	A	10.1.1.11
esxi02		IN	A	10.1.1.12
esxi03		IN	A	10.1.1.13
esxi04		IN	A	10.1.1.14
edge01		IN	A	10.1.1.21
edge02		IN	A	10.1.1.22
admin-machine	IN	A	10.1.1.31
admin-machine-windows	IN	A	10.1.1.32
nfs		IN	A	10.1.1.41
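
Before pointing the lab at this DNS server it is worth validating the configuration and zone file. A minimal sketch, assuming the standard CentOS paths /etc/named.conf and /var/named/rnd.com.forward (adjust the paths if the files live elsewhere):

named-checkconf /etc/named.conf
named-checkzone rnd.com /var/named/rnd.com.forward
systemctl restart named
dig @10.1.1.2 nsxt.rnd.com +short     # should return 10.1.1.4
dig @10.1.1.2 edge01.rnd.com +short   # should return 10.1.1.21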


Sample CSR1000v configuration with single router

Ideally there should be two ToR switches for redundancy, but in a lab a single router can be used. A single router can also act as a "router on a stick" for inter-VLAN routing among the VLANs 201-206 listed above, and can act as the DHCP server for VLAN 203 (host VTEP). A sample CSR1000v configuration which does all this, and also does NAT for the management subnet (10.1.1.0/24) via the external interface for Internet access, is:

version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch
!
boot-start-marker
boot-end-marker
!
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
!
!
!
!
!
!


!
ip dhcp pool host-overlay
 network 10.1.3.0 255.255.255.0
 domain-name rnd.com
 dns-server 10.1.1.2 
 default-router 10.1.3.1 
 lease 7
!
!
!
!
!
!
!
!
!
!
subscriber templating
multilink bundle-name authenticated
!
!
license udi pid CSR1000V sn 91E1JKLD4F3
!         
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
!
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
!
!
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.173 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 no ip address
 negotiation auto
 mtu 9216
!
interface GigabitEthernet2.1
 encapsulation dot1Q 201
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!         
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.1 255.255.255.0
!
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.1 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.1 255.255.255.0
!
interface GigabitEthernet2.5
 encapsulation dot1Q 205
 ip address 10.1.5.1 255.255.255.0
!
interface GigabitEthernet2.6
 encapsulation dot1Q 206
 ip address 10.1.6.1 255.255.255.0
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65000
 bgp router-id 10.1.6.1
 bgp log-neighbor-changes
 neighbor 10.1.5.2 remote-as 65000
 neighbor 10.1.5.3 remote-as 65000
 neighbor 10.1.6.2 remote-as 65000
 neighbor 10.1.6.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.5.2 activate
  neighbor 10.1.5.3 activate
  neighbor 10.1.6.2 activate
  neighbor 10.1.6.3 activate
 exit-address-family
!
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
!
!
control-plane
!
!
line con 0
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!         
!
end

Note:

  • Without "mtu 9216" on GigabitEthernet2, a ping might work but any larger data transfer (e.g. running a command with large output over SSH, or opening an https:// site) will not work, because the Geneve-encapsulated traffic routed between the host VTEP VLAN (203) and the edge VTEP VLAN (204) needs an MTU larger than the default 1500.
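
A minimal sketch of the checks that can be run on the router once the edges and the T0 gateway are up. The 1600-byte df-bit ping assumes the VTEP path MTU is at least 1600; a standard 1500-byte path would drop it, which is exactly the failure mode described in the note above:

show ip interface brief
show ip dhcp binding
show ip bgp summary
ping 10.1.4.11 size 1600 df-bit

"show ip dhcp binding" lists the host VTEP leases handed out from the host-overlay pool, "show ip bgp summary" should show the four edge uplink neighbors (10.1.5.2, 10.1.5.3, 10.1.6.2 and 10.1.6.3) with a prefix count rather than Idle or Active once the T0 BGP configuration is in place, and the oversized df-bit ping towards an edge VTEP IP confirms that frames larger than 1500 bytes pass end to end.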

