VCloud foundation fully automated deployment with bringup using vlc

From Notes_Wiki

Home > VMWare platform > vCloud Foundation > Setup automated lab environment for vCloud Foundation using VLC > vCloud foundation fully automated deployment with bringup using vlc

vlc (vCF Lab Constructor) is a powerCLI (VMWare powershell module for managing ESXi hosts, vCenter, etc.) script for configuring a nested vCloud Foundation setup. Since vCloud Foundation is not set up step by step manually (install ESXi, install vCenter, etc.) but is instead built automatically by Cloud Builder from an input parameters file, it is not easy to build this setup in a lab or nested lab unless you have validated hardware nodes available as spare. With the help of vlc we can set up a nested vCF (ESXi installed in a VM on top of an existing ESXi host) for experiments.

The below document is based on VLC 4.2

For this, the following pre-requisites are required:

  1. An ESXi host with ESXi 6.7+ with 12 cores, 128 GB RAM and 800 GB disk space. It would be best if the disk is SSD
  2. A Windows 10 VM (an enterprise evaluation license can be used if required)
  3. VMWare licenses for:
    ESXi - 4 socket license
    vSAN - 4 socket license
    vCenter - 1 standard license
    NSX-T - 4 socket license
    SDDC Manager
    4 socket license. This is not required during the deployment as it is not asked for in the vlc configuration input json, but it can be added later on manually

If the above-mentioned pre-requisites are available, then use the below steps to get a fully automated vCF deployment, including bring-up, using VLC:

  1. Configure ESXi host switch with MTU 9000
  2. Create at least one standard port-group with all VLANs trunked, for presenting all VLANs to the nested VMs
  3. On the above all-vlan-trunk port-group make sure all the three security settings are set to accept:
    • Promiscuous Mode = Accept
    • Allow Forged Transmits = Accept
    • Allow MAC Address Changes = Accept
    This is required even if the settings are set to accept at switch level. Override them at port-group level and set them to accept again.
  4. Disable HA, DRS at cluster level. Optionally also disable vMotion service on vmkernel port
  5. Build a Windows 10 VM with one NIC connected to the current LAN.
    1. Install VMWare tools on Windows VM and reboot.
    2. Add second NIC of type "vmxnet3" connected to ALL-VLAN-TRUNK port-group
    3. Install the following software on the Windows host
      1. Dotnet framework offline installer 4.8 for Windows
      2. VMWare ovf tool 4.4.1
      3. Download offline bundle of VMWare powerCLI 12.1 and install it using:
        1. Open powershell with administrator privileges (Right click on powershell launch icon and choose "Run as administrator")
        2. Execute the command:
          $env:PSModulePath
        3. Based on the output of the above command go to one of the module directories (create it if required) and extract the contents of the downloaded offline powerCLI 12.1 zip file in this location. After extracting you should have 10+ subfolders named VMware.* inside the Modules folder.
        4. In the administrator powershell cd to the modules folder where you have extracted the offline bundle.
        5. Execute following to unblock extracted modules
          Get-ChildItem * -Recurse | Unblock-File
        6. Check the version of powerCLI that got installed using:
          Get-Module -Name VMware.PowerCLI -ListAvailable
    4. On the second NIC of the Windows host (connected to the all-VLAN trunk port-group) set the IP and DNS, without any gateway
    5. On the second NIC of the Windows host configure the VLAN as 10: right click on the adapter, go to Properties -> click on Configure -> go to the Advanced tab -> go to the VLAN ID property and set its value to 10
    6. Download VLC by filling the Google form or using the direct download link (see the links at the bottom of this page)
    7. Extract the Zip file at C:\VLC
    8. Download the latest cloud builder appliance (VMWare Cloud Builder 4.2.0 at the time of this writing) and copy the OVA to the C:\VLC folder
    9. Edit AUTOMATED_AVN_VCF_VLAN_10-13_NOLIC_v42 file and set the four licenses (ESXi, vSAN, vCenter and NSX) at appropriate places in the document. Do not modify anything else.
    10. Run "VLCGui.ps1" in powershell.
    11. Choose automated option for the deployment.
    12. Give the vCenter or ESXi host connection details in the fields on the right side and try connecting
    13. Based on the connection, choose the cluster, network (the ALL-VLAN-Trunk port-group created earlier) and datastore for the vCF node deployment
    14. Select the cloud builder ova
    15. Click Validate
    16. Once validation is successful click Construct to build the nested vCF setup.
      On a datastore backed by six 2 TB magnetic disks in RAID 5 (10 TB usable), on a server with 256 GB RAM and 32 cores, the deployment took about 4 hours to complete.
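Steps 1-4 above can also be scripted with powerCLI from the jump box instead of clicking through the vSphere client. The sketch below is only an illustration: the host name (esxi01.lab.local), switch name (vSwitch0), port-group name (ALL-VLAN-TRUNK) and cluster name (Lab-Cluster) are assumptions, and the cluster step only applies when connecting through a vCenter.

```powershell
# Sketch of steps 1-4; all names below are assumptions for illustration.
Connect-VIServer -Server esxi01.lab.local -User root -Password 'VMware123!'

# 1. Set MTU 9000 on the host standard switch
Get-VirtualSwitch -Name vSwitch0 | Set-VirtualSwitch -Mtu 9000 -Confirm:$false

# 2. Create a port-group with all VLANs trunked (VLAN ID 4095 = trunk on a standard switch)
New-VirtualPortGroup -VirtualSwitch (Get-VirtualSwitch -Name vSwitch0) `
    -Name 'ALL-VLAN-TRUNK' -VLanId 4095

# 3. Override all three security settings to accept at the port-group level
Get-VirtualPortGroup -Name 'ALL-VLAN-TRUNK' | Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true

# 4. Disable HA and DRS at the cluster level (vCenter connections only)
Set-Cluster -Cluster 'Lab-Cluster' -HAEnabled:$false -DrsEnabled:$false -Confirm:$false
```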

Study of the automated deployment

Once the automated deployment is complete, note the following components to understand it better:

Physical lab deployment

On the actual physical lab infrastructure we can see the following deployments:

  1. VMs: CB-01a (Cloud builder), esxi-1, esxi-2, esxi-3, esxi-4
  2. In datastore folder ISO: with VLC_vsphere.iso file
  3. On jumpbox under C:\VLC - Logs and cb_esx_iso folders

To start fresh, delete all of these and launch a new deployment
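The cleanup above can also be scripted with powerCLI. This is only a sketch: the datastore name (datastore1) is an assumption, while the VM names, the ISO path and the jump box folders match the list above.

```powershell
# Remove the nested lab artifacts to start a fresh deployment (some names assumed)
Connect-VIServer -Server esxi01.lab.local -User root -Password 'VMware123!'

# 1. Power off and permanently delete the nested VMs
'CB-01a','esxi-1','esxi-2','esxi-3','esxi-4' | ForEach-Object {
    $vm = Get-VM -Name $_ -ErrorAction SilentlyContinue
    if ($vm) {
        if ($vm.PowerState -eq 'PoweredOn') { Stop-VM -VM $vm -Confirm:$false }
        Remove-VM -VM $vm -DeletePermanently -Confirm:$false
    }
}

# 2. Delete the generated ISO from the datastore (datastore name is an assumption)
New-PSDrive -Name ds -PSProvider VimDatastore -Root '\' `
    -Datastore (Get-Datastore -Name 'datastore1') | Out-Null
Remove-Item ds:\ISO\VLC_vsphere.iso -ErrorAction SilentlyContinue
Remove-PSDrive -Name ds

# 3. On the jump box, clear the VLC working folders
Remove-Item C:\VLC\Logs, C:\VLC\cb_esx_iso -Recurse -Force -ErrorAction SilentlyContinue
```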

Cloud Builder services

Access cloud builder using SSH to its IP from the jump box. Login using username:password admin:VMware123!

The following services are deployed on the cloud builder appliance in case of a fully automated deployment:

DNS (maradns and deadwood)
/etc/maradns/db.vcf.ssdc.lab; /etc/dwood3rc; systemctl restart maradns; systemctl restart maradns.deadwood
L3 inter-VLAN routing
ip addr show; sysctl net.ipv4.ip_forward; iptables -L

AVN and NO_AVN json differences

The only difference between the AVN and NO_AVN json files is in the last line, where the NO_AVN json file additionally excludes AVN and EBGP. The 'diff' output of the two json files is:

<   "excludedComponents": ["NSX-V"]
>   "excludedComponents": ["NSX-V", "AVN", "EBGP"]
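The same difference can be checked locally with plain powershell, which also shows how the input json files can be inspected before a run. The NO_AVN file name below is an assumption following the AVN file's naming pattern; adjust it to the actual file in your VLC bundle.

```powershell
# Compare the excludedComponents arrays of the two bundled json files
# (the NO_AVN file name is assumed; use the name present in C:\VLC)
$avn   = Get-Content 'C:\VLC\AUTOMATED_AVN_VCF_VLAN_10-13_NOLIC_v42.json' -Raw |
             ConvertFrom-Json
$noAvn = Get-Content 'C:\VLC\AUTOMATED_NO_AVN_VCF_VLAN_10-13_NOLIC_v42.json' -Raw |
             ConvertFrom-Json

# Expected result: only the extra "AVN" and "EBGP" entries on the NO_AVN side
Compare-Object $avn.excludedComponents $noAvn.excludedComponents
```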

Values in the json file and where their impact can be seen in the final deployment

The following values are worth noting in the automated deployment json config files. The corresponding places in the deployment where these are reflected are also mentioned.

  • sddcManagerSpec:
  • networkSpecs:
    We can see the various networks at the vCenter level via the distributed switch (sddc-vds01)
    MANAGEMENT subnet
    Gateway (Cloud builder appliance)
    There is an option while doing the fully automated deployment to specify EXT_GW. If we leave it blank, the default route in the Cloud builder appliance is:
    default via dev eth0.10 proto static
    Presumably, once we specify an EXT_GW value in the automated form this gets updated to the specified value.
    10 -- The VLAN ID for all other port-groups seems to be 10. Hence various different subnets are broadcast / shared within a single VLAN.
    1500 -- The VDS MTU is 9000. For vmkernel ports in the respective VLANs / networks the MTU is set as per configuration; hence vmkernel ports in the management network have 1500 MTU.
    VMOTION subnet
    VMOTION mtu
    8940 (Same for other later subnets)
    vSAN subnet
    vMotion and vSAN networks are defined in the mgmt-network-pool under the SDDC manager network settings
    UPLINK01 - vlanId
    UPLINK01 - subnet
    UPLINK02 - vlanId
    UPLINK02 - subnet
    NSXT_EDGE_TEP - vlanId
    NSXT_EDGE_TEP - subnet
    Note that the sddc-edge-uplink01 and sddc-edge-uplink02 networks are full VLAN trunks and not limited to VLAN IDs 11,12. This is expected so that the edge can do L2 bridging between a VLAN and a segment if required. We can see VCF-edge_mgmt-edge-cluster_segment_11 and VCF-edge_mgmt-edge-cluster_segment_12 in NSX Manager ( admin:VMware123!VMware123!) - Networking -> Segments
    X_REGION - subnet
    X_REGION - vlanId
    REGION_SPECIFIC - subnet
    REGION_SPECIFIC - vlanId: 0
    We can see two segments in NSX Manager ( admin:VMware123!VMware123!):
    Mgmt-RegionA01-VXLAN - - Connected to Tier1 (mgmt-domain-tier1-gateway)
    Mgmt-xRegion01-VXLAN - - Connected to Tier1 (mgmt-domain-tier1-gateway)
    Both of type overlay connected to mgmt-domain-m01-overlay-tz transport zone. The purpose of creating two different segments with chosen naming is not clear.

  • nsxtSpec:
    We can see the VIP by going to System -> Appliances in NSX Manager, which can be accessed at either of the IPs.
    We can see the use of the transport VLAN ID by going to System -> Fabric -> Profiles and looking at the host-uplink-mgmt-cluster profile with Transport VLAN 10
    The ASN can be seen at Networking -> Tier-0 Gateways -> mgmt-domain-tier0-gateway -> BGP

  • edgeNodeSpecs:
    edge01-mgmt managementCidr
    edge01-mgmt edgeVtep1Cidr
    edge01-mgmt edgeVtep2Cidr
    edge01-mgmt uplink-edge1-tor1 interfaceCidr
    edge01-mgmt uplink-edge1-tor2 interfaceCidr
    edge02-mgmt managementCidr
    edge02-mgmt edgeVtep1Cidr
    edge02-mgmt edgeVtep2Cidr
    edge02-mgmt uplink-edge1-tor1 interfaceCidr
    edge02-mgmt uplink-edge1-tor2 interfaceCidr
    We can see edge management IP and VTEP IPs at System -> Fabric -> Nodes -> Edge Transport Nodes.
    The ToR IPs are visible at Networking -> Tier-0 Gateway -> GW -> Interfaces
    bgpNeighbours.neighbourIp (AS 65001)
    bgpNeighbours.neighbourIp (AS 65001)
    BGP neighbors are visible at Networking -> Tier-0 Gateway -> GW -> BGP -> BGP Neighbours
    logicalSegments Mgmt-RegionA01-VXLAN
    logicalSegments Mgmt-xRegionA01-VXLAN

  • hostSpecs:
    esxi-1 ipAddressPrivate.ipAddress
    esxi-2 ipAddressPrivate.ipAddress
    esxi-3 ipAddressPrivate.ipAddress
    esxi-4 ipAddressPrivate.ipAddress


VLC Download link
Blog on VLC part 1
Blog on VLC part 2
Google form to get link for downloading VLC
Direct VLC 4.2 download link
VLC slack community
