Configure NSX-T 3.0 from scratch with edge cluster and tier gateways

NSX configuration is very specific to individual needs, so having a fixed set of steps is not practical. But for lab / learning purposes one can go through the steps below to get a working NSX-T 3.0 setup with an edge cluster and tier gateways (both T0 and T1) which talk to the ToR switches using BGP.

Pre-requisites

  1. The following values / pre-requisites are required before we can go ahead with the NSX-T 3.0 setup with BGP peering to the ToR switches:
    • NTP for the setup
    • DNS to resolve various FQDN for the setup
    • NSX-T, vCenter, ESXi licenses
    • ESXi hosts IP, FQDN, Management VLAN, vMotion VLAN, Shared storage/vSAN
    • vCenter IP, FQDN, VLAN
    • NSX-T manager management IP
    • VLAN for Host VTEP communication - Needs DHCP
    • VLAN for Edge VTEP communication
    • VLAN for left ToR Edge Uplink (North-South)
    • VLAN for right ToR Edge Uplink (North-South)
    • Two IPs for edge in management VLAN with DNS entries
    • Two IPs per edge (4 for 2 edges) in edge VTEP communication VLAN
    • Two IPs per edge (one in left ToR Edge uplink VLAN and other in right ToR Edge uplink VLAN)
    • BGP configuration on both left and right ToR in respective uplink VLANs


Example values used for below steps

  1. For example, the steps below use the following values:
    • NTP: time.google.com
    • DNS: 10.1.1.2 - a bind DNS server specifically set up for rnd.com for lab experiments. Refer Configuring basic DNS service with bind. This DNS has the named.conf and rnd.com.forward files listed later in this article.
    • ESXi hosts IPs: 10.1.1.11 to 10.1.1.14
    • ESXi hosts FQDN: esxi01.rnd.com to esxi04.rnd.com
    • Management VLAN: 201 - 10.1.1.0/24
    • vMotion VLAN: 202 - 10.1.2.0/24. ESXi hosts will use IPs 10.1.2.11 to 10.1.2.14 in this VLAN for vMotion.
    • Shared storage: NFS at 10.1.1.41. Refer CentOS 7.x Cloudstack 4.11 Setup NFS for secondary and primary (if no central storage) storage
    • vCenter details: IP 10.1.1.3, FQDN vcenter.rnd.com, VLAN 201 (management VLAN)
    • NSX-T manager management IP: 10.1.1.4, FQDN nsxt.rnd.com
    • VLAN for host VTEP communication: VLAN 203 - 10.1.3.0/24. For this we can set up DHCP by adding a Linux machine with DHCP service to the network, or we can set up the DHCP service on the ToR switches. Refer: Configure DHCP server on Cisco router
    • VLAN for edge VTEP communication: VLAN 204 - 10.1.4.0/24
    • VLAN for left ToR edge uplink (North-South): VLAN 205 - 10.1.5.0/24
    • VLAN for right ToR edge uplink (North-South): VLAN 206 - 10.1.6.0/24
    • Two IPs for edges in management VLAN with DNS entries: edge01.rnd.com (10.1.1.21) and edge02.rnd.com (10.1.1.22)
    • Two IPs per edge (4 for 2 edges) in edge VTEP communication VLAN: 10.1.4.11/24 to 10.1.4.14/24
    • Two IPs per edge (one in left ToR edge uplink VLAN and the other in right ToR edge uplink VLAN): 10.1.5.2/24, 10.1.5.3/24, 10.1.6.2/24 and 10.1.6.3/24
    • BGP configuration on both left and right ToR in respective uplink VLANs: refer to the sample CSRV1000 router configurations given below


Setup ESXi, NFS, DNS and vCenter

  1. Set up four ESXi hosts (esxi01.rnd.com to esxi04.rnd.com) for the nested setup using all-trunk VLAN ports, with management IPs 10.1.1.11 to 10.1.1.14 in VLAN 201. Note the point about VMkernel MAC address at Install nested ESXi on top of ESXi in a VM
  2. Create DNS machine with IP 10.1.1.2 connected to VLAN 201 for DNS resolution. Refer Configuring basic DNS service with bind.
  3. Create NFS machine with IP 10.1.1.41 for shared storage among all four ESXi hosts. Refer CentOS 7.x Cloudstack 4.11 Setup NFS for secondary and primary (if no central storage) storage
  4. Change "VM Network" VLAN to 201 on all four ESXi hosts. This will help in reachability to vCenter when it is deployed on 'VM Network' later.
  5. Add NFS datastore from 10.1.1.41 to esxi01.rnd.com so that we can deploy vCenter on this shared NFS datastore
  6. Create a few management stations 10.1.1.31 (Linux) and 10.1.1.32 (Windows) in VLAN 201 for nested lab operations. Both these machines also need an interface in the normal LAN (e.g. 172.31.1.0/24 suggested here) without a gateway, so that they can be reached from LAN machines in the 172.31.1.0/24 subnet. The gateway for these management stations should be 10.1.1.1 so that they can access all subnets via the L3 switch / router.
  7. Deploy vCenter with IP 10.1.1.3 / FQDN vcenter.rnd.com from the Windows management station on top of nested ESXi (esxi01.rnd.com) using the NFS datastore and VM network.
  8. Once vCenter for nested lab experiments is available do the following:
    1. Add all four ESXi hosts into two clusters - a compute cluster (esxi01, esxi02) and an edge cluster (esxi03, esxi04)
    2. Add ESXi and vCenter licenses to vCenter. Assign these licenses.
    3. Create a distributed switch using two of the four uplink ports of all four nested ESXi VMs.
      1. Create distributed port group for management using VLAN 201
      2. Migrate vCenter and the vmk0 kernel port to this distributed switch.
      3. Later migrate the remaining two uplinks also to the distributed switch.
      4. Delete the standard switch from all four hosts
    4. Configure MTU of at least 1700 on this distributed switch in the nested lab. The MTU on the external ESXi host switch hosting these nested ESXi VMs should be higher (ideally 9000+). If multiple ESXi hosts are used for the lab setup then these hosts should have VLAN connectivity for all required VLANs (including 201-206 listed above) and MTU 9000+
    5. Create distributed port-group ALL-VLAN-TRUNK on the distributed switch using the "VLAN Trunking" option with range 0-4094
    6. Create distributed port-group for vMotion (VLAN 202) and add VMkernel ports (10.1.2.11 to 10.1.2.14) on all four ESXi hosts for vMotion
    7. Mount the NFS datastore from 10.1.1.41 (already added on esxi01) on the other 3 hosts so that all four hosts share it. A quick validation sketch follows this list.
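
A minimal CLI sketch for the NFS mount and MTU validation above (assumed names: /nfs-share as the NFS export path, nfs01 as the datastore name, vmk1 as the vMotion VMkernel port with its MTU also raised to 1700; adjust to your setup):

  # mount the NFS export from the ESXi shell (equivalent of the vSphere UI step)
  esxcli storage nfs add -H 10.1.1.41 -s /nfs-share -v nfs01
  # verify MTU 1700 end-to-end over the distributed switch: 1672 bytes payload + 28 bytes headers, don't-fragment set
  vmkping -I vmk1 -d -s 1672 10.1.2.12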


Setup NSX-T Manager and Edge cluster

  1. Deploy NSX-T manager with IP 10.1.1.4 - FQDN nsxt.rnd.com on the management port group from one of the admin stations (10.1.1.31 or 10.1.1.32). This NSX manager can be deployed on top of esxi02.rnd.com.
    Don't choose size extra small. That is meant for a different purpose altogether.
    Enable SSH to the NSX-T manager.
  2. Add the NSX-T license to NSX-T manager. Then go to System -> Fabric -> Compute Managers and integrate the NSX manager with vCenter.
  3. Go to System -> Fabric -> Transport Zones and create two transport zones: one for edge communication with traffic type VLAN (edge-vlan-tz) and one for overlay communication with traffic type Overlay (mg-overlay-tz)
  4. Go to System -> Fabric -> Profiles. Create an uplink profile with teaming policy load-balance source, 4 active uplinks (u1, u2, u3, u4) and transport VLAN 203 for host-VTEP communication
  5. Go to System -> Fabric -> Nodes -> Host Transport Nodes. Select "Managed by vcenter.rnd.com". Configure NSX on the compute cluster hosts (esxi01 and esxi02) using the overlay transport zone (mg-overlay-tz) and the uplink profile with 4 uplinks in VLAN 203 (host VTEP) with DHCP based IPs. Assign all four uplinks of the existing vDS to this host configuration. No need to go for N-VDS or enhanced datapath
  6. Go to System -> Fabric -> Nodes -> Edge Clusters. Create Edge cluster with name edgecluster01
  7. Go to System -> Fabric -> Profiles. Create an uplink profile for edges with transport VLAN 204 (edge VTEP) and two uplinks with teaming policy load-balance source
  8. Go to Networking -> Segments and create segments for the edge uplinks. Create segment left-uplink with transport zone edge-vlan-tz and VLAN 205.
    Similarly create segment right-uplink with transport zone edge-vlan-tz and VLAN 206.
  9. Go to System -> Fabric -> Nodes -> Edge Transport Nodes. Create edge (edge01) with management IP 10.1.1.21 - FQDN edge01.rnd.com with both uplinks as the ALL-VLAN-TRUNK distributed port-group. Use the edge uplink profile created in the previous steps. Use 10.1.4.11 and 10.1.4.12 as VTEP IPs for the edge in VLAN 204. Both mg-overlay-tz and edge-vlan-tz transport zones should be selected for the edge.
    Create this without resource (memory) reservation. Use the edge cluster (esxi03, esxi04) created above for deploying this.
  10. Similarly create edge (edge02) with management IP 10.1.1.22 - FQDN edge02.rnd.com with both uplinks as the ALL-VLAN-TRUNK distributed port-group. Use the edge uplink profile created in the previous steps. Use 10.1.4.13 and 10.1.4.14 as VTEP IPs for the edge in VLAN 204.
  11. Once both edges are ready add them to edgecluster01 created above.
  12. SSH to an edge as admin and run "get logical-routers". Enter the first VRF with "vrf 0" and check the interfaces with "get interfaces". Try to ping the gateway 10.1.4.1 to validate edge VTEP connectivity to the VTEP VLAN gateway, as in the sketch below.
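
The edge CLI session for this check could look as follows (the commands are the ones named in the step above):

  ssh admin@edge01.rnd.com
  get logical-routers        # lists the VRFs on the edge; the tunnel VRF is 0
  vrf 0
  get interfaces             # should show the edge VTEP IPs (10.1.4.11, 10.1.4.12)
  ping 10.1.4.1              # the VTEP VLAN gateway should reply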


Setup T0 and T1 gateway routers with overlay segments

  1. Go to Networking -> T0-Gateways
  2. Create t0-gw as active-standby on edgecluster01. Click save
  3. Add interface edge01-left-uplink with IP 10.1.5.2/24 connected to the left-uplink segment. Select edge01 as the edge node.
    Similarly create interface edge01-right-uplink with IP 10.1.6.2/24 connected to the right-uplink segment. Select edge01 as the edge node.
  4. Add interface edge02-left-uplink with IP 10.1.5.3/24 connected to the left-uplink segment. Select edge02 as the edge node.
    Similarly create interface edge02-right-uplink with IP 10.1.6.3/24 connected to the right-uplink segment. Select edge02 as the edge node.
  5. Go to BGP and configure AS 65000. Enable BGP.
  6. Add BGP neighbor 10.1.5.1 with remote AS 65000 and source IPs 10.1.5.2 and 10.1.5.3
    Similarly add BGP neighbor 10.1.6.1 with remote AS 65000 and source IPs 10.1.6.2 and 10.1.6.3
  7. Add route redistribution for all routes. Select at least NAT IP and "Connected Interfaces and Segments (including subtree)" under the T0 gateway
    Select "Connected Interfaces and Segments" and NAT IP for Tier-1 gateways
  8. SSH to an edge and run "get logical-routers". There should be a service router for the T0 gateway at vrf 1. Enter "vrf 1".
    Validate that the BGP connections to the ToR have come up using "get bgp neighbor summary"
    Look at the BGP routes using "get bgp"
  9. Go to Networking -> T1-Gateways
  10. Add a tier-1 gateway such as t1-prod with connectivity to the t0-gw created before. Select edge cluster edgecluster01. By selecting an edge cluster we get a service router for the t1-gateway, which is useful for NAT / L2 bridging etc. Click save.
    Go to route advertisement and advertise routes for "All connected segments and service ports"
  11. Go to Networking -> Segments. Add a segment such as app-seg with connectivity to t1-prod and transport zone mg-overlay-tz. Give subnet as 10.1.7.1/24. This will become the default gateway for this overlay segment on the distributed router.
    Add another segment web-seg with connectivity to t1-prod and transport zone mg-overlay-tz. Give subnet as 10.1.8.1/24.
  12. Now these two new overlay segments 10.1.7.0/24 and 10.1.8.0/24 should be reachable from admin stations 10.1.1.31 and 10.1.1.32. We should see their routes on the service router for the t0-gateway and on the ToR, as in the sketch below.
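
A validation sketch for steps 8 and 12 (the edge CLI commands are the ones named above; the pings assume the admin station uses gateway 10.1.1.1 and the ToR has routes to the new segments):

  ssh admin@edge01.rnd.com
  get logical-routers          # the t0-gw service router should appear as vrf 1
  vrf 1
  get bgp neighbor summary     # sessions to 10.1.5.1 / 10.1.6.1 should be Established
  get bgp                      # learned and advertised BGP routes

  # from admin station 10.1.1.31
  ping 10.1.7.1                # app-seg gateway on the distributed router
  ping 10.1.8.1                # web-seg gateway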



Input files to help with test setup

rnd.com zone in named.conf for dns.rnd.com

Only rnd.com zone related lines of named.conf are captured below:

zone "rnd.com" IN {
        type master;
        file "rnd.com.forward";
};

For full steps refer Configuring basic DNS service with bind
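
After editing named.conf the configuration syntax can be checked with (a quick sketch, assuming the standard /etc/named.conf path):

  named-checkconf /etc/named.conf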


rnd.com.forward for dns.rnd.com

rnd.com.forward file for rnd.com zone is:

$TTL 3600
@ SOA ns.rnd.com. root.rnd.com. (1 15m 5m 30d 1h)
	NS dns.rnd.com.
	A 10.1.1.2
l3switch	IN	A	10.1.1.1
dns             IN      A       10.1.1.2
vcenter         IN      A       10.1.1.3
nsxt            IN      A       10.1.1.4
nsxt1           IN      A       10.1.1.5
nsxt2           IN      A       10.1.1.6
nsxt3           IN      A       10.1.1.7
esxi01		IN	A	10.1.1.11
esxi02		IN	A	10.1.1.12
esxi03		IN	A	10.1.1.13
esxi04		IN	A	10.1.1.14
edge01		IN	A	10.1.1.21
edge02		IN	A	10.1.1.22
admin-machine	IN	A	10.1.1.31
admin-machine-windows	IN	A	10.1.1.32
nfs		IN	A	10.1.1.41
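
The zone can be validated and queried once named is (re)started (a sketch, assuming the zone file sits at the default /var/named/ location):

  named-checkzone rnd.com /var/named/rnd.com.forward
  systemctl restart named
  dig @10.1.1.2 vcenter.rnd.com +short    # should return 10.1.1.3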


Sample CSRV1000 configuration with single router

Ideally there should be two ToRs for redundancy, but in a lab we can use a single router. A single router can also act as a "router on a stick" for inter-VLAN routing among VLANs 201-206 listed above. The router can additionally act as DHCP server for VLAN 203 (host VTEP). A sample CSRV1000 configuration which does all this, and also does NAT for management (10.1.1.0/24) via the external interface for Internet access, is:

CSRV1000 routers have been found to have very low throughput (<2 Mbps). Ideally consider using a Linux machine as router, as described later.

version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch
!
boot-start-marker
boot-end-marker
!
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
!
!
!
!
!
!


!
ip dhcp pool host-overlay
 network 10.1.3.0 255.255.255.0
 domain-name rnd.com
 dns-server 10.1.1.2 
 default-router 10.1.3.1 
 lease 7
!
!
!
!
!
!
!
!
!
!
subscriber templating
multilink bundle-name authenticated
!
!
license udi pid CSR1000V sn 91E1JKLD4F3
!         
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
!
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
!
!
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.173 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 no ip address
 negotiation auto
 mtu 9216
!
interface GigabitEthernet2.1
 encapsulation dot1Q 201
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!         
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.1 255.255.255.0
!
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.1 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.1 255.255.255.0
!
interface GigabitEthernet2.5
 encapsulation dot1Q 205
 ip address 10.1.5.1 255.255.255.0
!
interface GigabitEthernet2.6
 encapsulation dot1Q 206
 ip address 10.1.6.1 255.255.255.0
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65000
 bgp router-id 10.1.6.1
 bgp log-neighbor-changes
 neighbor 10.1.5.2 remote-as 65000
 neighbor 10.1.5.3 remote-as 65000
 neighbor 10.1.6.2 remote-as 65000
 neighbor 10.1.6.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.5.2 activate
  neighbor 10.1.5.3 activate
  neighbor 10.1.6.2 activate
  neighbor 10.1.6.3 activate
 exit-address-family
!
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
!
!
control-plane
!
!
line con 0
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!         
!
end

Note:

  • Without "mtu 9216" on GigabitEthernet 2 the ping might work but any data transfer eg running command with large output after ssh or opening https:// site will not work.

Here:

  • GigabitEthernet1 is connected to the external LAN network (172.31.1.0/24) for Internet connectivity / NAT with IP 172.31.1.173
  • GigabitEthernet2 is an all-VLAN trunk port. Refer Create standard port-group for trunking all VLANs to VM

Sample CSRV1000 configuration with two routers

CSRV1000 routers have been found to have very low throughput (<2 Mbps). Ideally consider using a Linux machine as router, as described later.

Router1 configuration

!
! Last configuration change at 17:58:46 UTC Wed Apr 28 2021
!
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch
!
boot-start-marker
boot-end-marker
!
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
!
!
!
!
!
!


!
ip dhcp pool host-overlay
 network 10.1.3.0 255.255.255.0
 domain-name rnd.com
 dns-server 10.1.1.2 
 default-router 10.1.3.1 
 lease 7
!
!
!
!         
!
!
!
!
!
!
subscriber templating
multilink bundle-name authenticated
!
!
license udi pid CSR1000V sn 9VA2HK6J7QZ
!
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
!
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
!
!
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.179 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 mtu 9216
 no ip address
 negotiation auto
!         
interface GigabitEthernet2.1
 ip nat inside
!
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.1 255.255.255.0
!
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.1 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.1 255.255.255.0
!
interface GigabitEthernet2.5
 encapsulation dot1Q 205
 ip address 10.1.5.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet2.201
 encapsulation dot1Q 201
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65001
 bgp router-id 10.1.5.1
 bgp log-neighbor-changes
 neighbor 10.1.5.2 remote-as 65000
 neighbor 10.1.5.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.5.2 activate
  neighbor 10.1.5.3 activate
 exit-address-family
!
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
!
!
control-plane
!
!
line con 0
 logging synchronous
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!
!
end


Router2 configuration

!
! Last configuration change at 18:18:36 UTC Wed Apr 28 2021 by admin
!
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch-right
!
boot-start-marker
boot-end-marker
!
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
!
!
!
!
!
!


!
!
!
!
!
!
!
!
!
!
subscriber templating
multilink bundle-name authenticated
!
!
license udi pid CSR1000V sn 94Z2LDZJH7F
!
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
!
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
!
!
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.180 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 mtu 9216
 no ip address
 negotiation auto
!
interface GigabitEthernet2.1
 ip nat inside
!
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.251 255.255.255.0
!         
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.251 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.251 255.255.255.0
!
interface GigabitEthernet2.6
 encapsulation dot1Q 206
 ip address 10.1.6.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet2.201
 encapsulation dot1Q 201
 ip address 10.1.1.251 255.255.255.0
 ip nat inside
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65001
 bgp router-id 10.1.6.1
 bgp log-neighbor-changes
 neighbor 10.1.6.2 remote-as 65000
 neighbor 10.1.6.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.6.2 activate
  neighbor 10.1.6.3 activate
 exit-address-family
!
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
!
!
control-plane
!
!
line con 0
 logging synchronous
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!
!
end
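
On each router the BGP sessions and learned routes can be checked with standard IOS show commands once the T0 gateway is up (a sketch; the addresses are from the example values above):

  show ip bgp summary        ! neighbors 10.1.6.2 / 10.1.6.3 (or 10.1.5.x on Router1) should show prefixes received
  show ip route bgp          ! overlay segment routes learned via BGP, if redistributed by the T0 gateway
  show ip dhcp binding       ! host VTEP leases handed out in VLAN 203 (Router1 only)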


Using Linux machine for BGP, DHCP and inter-VLAN routing

Since CSRV1000 routers have been found to have very low throughput (<2 Mbps), it makes sense to use a Linux machine for NAT, inter-VLAN routing, etc. It is possible to create the ToR switch functionality required by NSX on a Linux machine as follows:

  1. Create a CentOS 8 Stream Linux machine with 7 interfaces. 6 interfaces should be in VLANs 201-206 and the last interface in a VLAN where the router can get LAN / Internet connectivity
  2. Login to the machine and run the following command to note the MAC addresses and interface names:
    ip addr show
  3. Note the VLAN ID and MAC address using the vSphere Web UI so that we know which interface is in which VLAN
  4. Create interface configuration files for VLANs 201-206 at /etc/sysconfig/network-scripts/ifcfg-<interface-name> similar to:
    TYPE=Ethernet
    BOOTPROTO=static
    DEFROUTE=no
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no
    NAME=<interface-name>
    DEVICE=<interface-name>
    ONBOOT=yes
    IPADDR=10.1.<X>.1
    NETMASK=255.255.255.0
    ZONE=internal
    MTU=9000
    Here replace interface-name and X with appropriate values based on the name of the interface and the VLAN subnet (e.g. for VLAN 201 X is 1, for VLAN 202 X is 2, etc.)
    Refer: https://www.cyberciti.biz/faq/centos-rhel-redhat-fedora-debian-linux-mtu-size/
  5. For the remaining 7th interface which gives connectivity to LAN/Internet use:
    TYPE=Ethernet
    BOOTPROTO=static
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no
    NAME=<interface-name>
    DEVICE=<interface-name>
    ONBOOT=yes
    IPADDR=<lan-ip>
    NETMASK=<lan-subnet-mask>
    GATEWAY=<lan-gateway>
    DNS1=4.2.2.2
    ZONE=external
    Here replace interface-name, lan-* and DNS etc. appropriately as required. Avoid using the 10.1.1.2 DNS on this machine to prevent a cyclic dependency: the 10.1.1.2 jump box depends upon this machine for Internet access. The ZONE=internal / ZONE=external values are firewalld zones; see the masquerading sketch after this list.
  6. Install and configure openbgpd on machine using:
    1. Install openbgpd using:
      dnf -y install epel-release
      dnf -y install openbgpd
    2. Configure openbgpd by editing /etc/bgpd.conf as follows:
      ASN="65001"
      AS $ASN
      router-id 10.1.6.1
      prefix-set mynetworks {
      0.0.0.0/0
      10.1.1.0/24
      10.1.2.0/24
      10.1.3.0/24
      10.1.4.0/24
      10.1.5.0/24
      10.1.6.0/24
      }
      # Comment the entries below (and the remaining ones in the stock file) so that the same ranges can be used in BGP
      prefix-set bogons {
      # 0.0.0.0/8 or-longer # 'this' network [RFC1122]
      # 10.0.0.0/8 or-longer # private space [RFC1918]
      # 172.16.0.0/12 or-longer # private space [RFC1918]
      # ...
      }
      # Comment the complete 'group "ibgp mesh" {' block till its ending }
      # Define neighbors using:
      group "upstreams" {
      neighbor 10.1.5.2 {
      remote-as 65000
      descr "10.1.5.2"
      }
      neighbor 10.1.5.3 {
      remote-as 65000
      descr "10.1.5.3"
      }
      neighbor 10.1.6.2 {
      remote-as 65000
      descr "10.1.6.2"
      }
      neighbor 10.1.6.3 {
      remote-as 65000
      descr "10.1.6.3"
      }
      }
      Refer https://man.openbsd.org/bgpd.conf
      Note that this does not result in routes from the local router being learned on the NSX side, or NSX routes being learned on the local machine. It only satisfies the BGP peering requirement. For actual communication, unfortunately, static routes for reaching the overlay segments (10.1.7.0/24 and 10.1.8.0/24) via 10.1.5.2 or 10.1.6.2 have to be added; see the sketch after this list.
    3. Start and enable bgpd using:
      systemctl start bgpd
      systemctl enable bgpd
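
For the Linux machine to actually route between the VLANs and NAT the lab subnets out via the external interface, IP forwarding and masquerading must be enabled. A minimal firewalld sketch, assuming the ZONE=internal / ZONE=external assignments from the ifcfg files above:

  # masquerade traffic leaving the external (LAN / Internet) zone; firewalld turns on IPv4 forwarding for this
  firewall-cmd --permanent --zone=external --add-masquerade
  # permit routed traffic between the internal VLAN interfaces
  firewall-cmd --permanent --zone=internal --set-target=ACCEPT
  firewall-cmd --reload
  # confirm forwarding is on
  sysctl net.ipv4.ip_forward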

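The static routes mentioned in the note above, plus a quick check of the BGP sessions, could look like this (a sketch; the next-hop 10.1.5.2 is from the example values and 10.1.6.2 works equally):

  # reach the NSX overlay segments via a T0 uplink IP
  ip route add 10.1.7.0/24 via 10.1.5.2
  ip route add 10.1.8.0/24 via 10.1.5.2
  # to persist, the routes can go into /etc/sysconfig/network-scripts/route-<interface-name>
  # verify the sessions to the NSX edges are Established
  bgpctl show summary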



Refer:

  • https://www.youtube.com/playlist?list=PLvjREERAnGxJctJOLLTXN9Z77_9g9at7o - Excellent YouTube playlist referred to learn many of the above concepts
