
NSX connection with Cisco ACI.

In the last tutorial I covered how I created the L3_Out interface in ACI. In this blog post I will be covering the NSX Edge, NSX Controller, and vTEP configuration on Cisco ACI. Here is the topology diagram:

NSX_ACI

The NSX setup is the same as I explained in my previous blog post. The only differences here are how the L3_Out is connected to the Edge, and how the Controller, vTEP, and ESX servers are connected to the ACI leaf.

  1. I created a tenant called vNetworkcloud-virtualLab from the APIC controller's Tenants section.
  2. In the Application Profile section, I created a set of EPGs: EPG-Controller, EPG-ESXServer, and EPG-vTEP. For each EPG I configured the respective VLAN number and leaf ports, as sketched below the screenshot.

EPGs
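For reference, the same tenant and EPG configuration can be pushed through the APIC REST API. Here is a minimal Python sketch, assuming a lab APIC reachable at https://apic.lab with default admin credentials; the application profile name (NSX-Infra), the leaf ports, and the VLAN IDs are illustrative placeholders rather than the exact values from my lab:

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Tenant, application profile, and the three EPGs; each EPG is bound to its
# bridge domain and statically mapped to a leaf port with a VLAN encap.
# Ports eth1/10-12 and VLANs 20/30/40 are placeholders.
config = """
<fvTenant name="vNetworkcloud-virtualLab">
  <fvAp name="NSX-Infra">
    <fvAEPg name="EPG-Controller">
      <fvRsBd tnFvBDName="BD-Controller"/>
      <fvRsPathAtt tDn="topology/pod-1/paths-112/pathep-[eth1/10]" encap="vlan-30"/>
    </fvAEPg>
    <fvAEPg name="EPG-ESXServer">
      <fvRsBd tnFvBDName="BD-ESXServer"/>
      <fvRsPathAtt tDn="topology/pod-1/paths-112/pathep-[eth1/11]" encap="vlan-20"/>
    </fvAEPg>
    <fvAEPg name="EPG-vTEP">
      <fvRsBd tnFvBDName="BD-VTEP"/>
      <fvRsPathAtt tDn="topology/pod-1/paths-112/pathep-[eth1/12]" encap="vlan-40"/>
    </fvAEPg>
  </fvAp>
</fvTenant>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)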

3. While creating each EPG, we can create the VRF and bridge domain from the APIC UI.

4. As shown in the diagram above, I created three bridge domains with their respective subnet gateways:

  • BD-Controller: 192.168.30.1
  • BD-ESXServer: 192.168.20.1
  • BD-VTEP: 192.168.40.1

5. All three bridge domains point to a VRF called vnetworkcloud-VRF; a REST sketch of this configuration follows below.
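Here is a matching hedged sketch for the VRF and bridge domains (same lab-APIC assumptions as above; the post lists only the gateway addresses, so the /24 masks are my assumption):

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# VRF plus the three bridge domains; each BD points at the VRF and carries
# its anycast gateway subnet. The /24 masks are assumptions.
config = """
<fvTenant name="vNetworkcloud-virtualLab">
  <fvCtx name="vnetworkcloud-VRF"/>
  <fvBD name="BD-Controller">
    <fvRsCtx tnFvCtxName="vnetworkcloud-VRF"/>
    <fvSubnet ip="192.168.30.1/24"/>
  </fvBD>
  <fvBD name="BD-ESXServer">
    <fvRsCtx tnFvCtxName="vnetworkcloud-VRF"/>
    <fvSubnet ip="192.168.20.1/24"/>
  </fvBD>
  <fvBD name="BD-VTEP">
    <fvRsCtx tnFvCtxName="vnetworkcloud-VRF"/>
    <fvSubnet ip="192.168.40.1/24"/>
  </fvBD>
</fvTenant>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)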

6. I configured OSPF between the ACI L3_Out interface and the NSX Edge Services Gateway, and also configured OSPF routing between the NSX ESG and the DLR.

This is how I configured NSX with Cisco ACI.


Configuration of L3_Out in Cisco ACI

In my previous blog post, I explained the basics of ACI terminology. In this blog post I will be covering how to enable a Layer 3 interface in ACI. There are three high-level steps to enable Layer 3 routing in ACI:

  • Enable MP-BGP inside the fabric and choose the spine RRs (a one-time global configuration).
  • Define the fabric access policy for the border leaf "L3_Out".
  • Define the tenant network policy for the border leaf "L3_Out".

Enable MP-BGP inside the fabric and choose the spine RRs.

  1. Log in to the APIC controller GUI.
  2. Go to Fabric -> Fabric Policies -> Pod Policies -> Policies -> BGP Route Reflector default.
    1. Enter the autonomous system number. In my case I configured AS 1.
    2. In the Route Reflector Nodes section, add only the spine switches. Since I have two spine switches, I added nodes 101 and 102.

RouteReflector1.

3. The next step is to create a pod policy group called BGP_ON and set its BGP Route Reflector Policy to default (Fabric -> Fabric Policies -> Pod Policies -> Policy Groups).

RouteReflector2.

4. Apply the BGP_ON pod policy group under the pod profile section.

RouteReflector3

Those are all the steps to enable MP-BGP inside the fabric and choose the spine RRs; an equivalent REST sketch follows below.
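For reference, the same MP-BGP configuration can be pushed as a single REST payload. A minimal sketch, again assuming a lab APIC at https://apic.lab with default admin credentials; the object names mirror the GUI steps above:

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Steps 1-4 in one payload: the AS number and spine RR nodes on the default
# BGP Route Reflector policy, a BGP_ON pod policy group referencing it, and
# the default pod profile updated to apply BGP_ON.
config = """
<fabricInst>
  <bgpInstPol name="default">
    <bgpAsP asn="1"/>
    <bgpRRP>
      <bgpRRNodePEp id="101"/>
      <bgpRRNodePEp id="102"/>
    </bgpRRP>
  </bgpInstPol>
  <fabricFuncP>
    <fabricPodPGrp name="BGP_ON">
      <fabricRsPodPGrpBGPRRP tnBgpInstPolName="default"/>
    </fabricPodPGrp>
  </fabricFuncP>
  <fabricPodP name="default">
    <fabricPodS name="default" type="ALL">
      <fabricRsPodPGrp tDn="uni/fabric/funcprof/podpgrp-BGP_ON"/>
    </fabricPodS>
  </fabricPodP>
</fabricInst>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)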

Define the fabric access policy for the border leaf "L3_Out".

Click Fabric -> Access Policies -> Configure Interface, PC and VPC, fill in the complete details such as the interfaces and switch name, select the attached device type "External Routed Devices", and give the respective domain name. Then click Save & Submit to create the fabric access policy for the leaf; a partial REST sketch follows below the screenshot.

AccessPolicy2
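Behind the scenes, this wizard creates a set of access-policy objects (interface and switch profiles, a policy group, an AAEP, a domain, and a VLAN pool). Here is a partial, hedged sketch of just the VLAN pool, external routed domain, and AAEP pieces; the names VP-L3Out, ExtRouted-Dom, and AAEP-L3Out are made up for illustration:

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Static VLAN pool, an external routed domain tied to it, and an AAEP that
# references the domain; all names and the VLAN range are illustrative.
config = """
<polUni>
  <infraInfra>
    <fvnsVlanInstP name="VP-L3Out" allocMode="static">
      <fvnsEncapBlk from="vlan-100" to="vlan-110"/>
    </fvnsVlanInstP>
    <infraAttEntityP name="AAEP-L3Out">
      <infraRsDomP tDn="uni/l3dom-ExtRouted-Dom"/>
    </infraAttEntityP>
  </infraInfra>
  <l3extDomP name="ExtRouted-Dom">
    <infraRsVlanNs tDn="uni/infra/vlanns-[VP-L3Out]-static"/>
  </l3extDomP>
</polUni>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)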

Define the tenant network policy for the border leaf "L3_Out".

As the name suggests, this is a tenant network policy. Click Tenants -> vNetworkCloud-VirtualLab -> Networking, right-click External Routed Networks, and select Create Routed Outside, as shown below:

1.png

The Create Routed Outside wizard provides multiple options to configure, such as the routing protocol, area ID, area type, VRF, and the external routed domain. I configured OSPF with the parameters shown below.

2

Click the + sign in the Nodes and Interfaces Protocol Profiles section (referring to the screenshot above) and you will see the screen below. I configured the L3_Out on Leaf_112, hence I named the node profile 112_profile.

3

Click the + sign in the Nodes section, provide the router ID, and click OK.

4

Once you click OK, you can see the router ID added under the Nodes section.

5

Click the + sign in the OSPF Interface Profile section to configure OSPF. Name the interface profile and click Next.

6

I made no changes to the OSPF profile or HSRP configuration, so I clicked Next.

7

Click the + sign after selecting the SVI tab.

8

Select the interface to be configured as Layer 3, then set the VLAN number, interface IP, and MTU, and click OK.

9

The last section is to configure the external EPG, which defines which external networks are classified behind this L3_Out. I configured any (0.0.0.0/0).

Click Finish to complete the configuration of the external routed network. The equivalent REST payload is sketched below.
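For reference, the whole routed outside can also be expressed as one REST payload. A hedged sketch, assuming a lab APIC; the router ID, SVI VLAN, IP address, MTU, and port below are illustrative stand-ins for the values shown in the screenshots:

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Routed outside with OSPF, the node profile on leaf 112, an SVI interface
# profile, and an external EPG matching any network (0.0.0.0/0).
# Router ID, VLAN, addresses, MTU, and port are placeholders.
config = """
<fvTenant name="vNetworkCloud-VirtualLab">
  <l3extOut name="L3_Out">
    <l3extRsEctx tnFvCtxName="vnetworkcloud-VRF"/>
    <l3extRsL3DomAtt tDn="uni/l3dom-ExtRouted-Dom"/>
    <ospfExtP areaId="0.0.0.0" areaType="regular"/>
    <l3extLNodeP name="112_profile">
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-112" rtrId="1.1.1.112"/>
      <l3extLIfP name="112_ifprofile">
        <ospfIfP/>
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-112/pathep-[eth1/20]"
                             ifInstT="ext-svi" encap="vlan-100"
                             addr="192.168.50.1/24" mtu="1500"/>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extInstP name="ExtEPG-Any">
      <l3extSubnet ip="0.0.0.0/0"/>
    </l3extInstP>
  </l3extOut>
</fvTenant>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)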

In this blog post, I covered the high-level steps to configure an L3_Out in ACI. In the next blog post I will cover how I configured NSX with ACI.


NSX & ACI Configuration tutorial: Basics of Cisco ACI terminology.

Since I know bits and pieces of NSX, I tried my level best to understand Cisco ACI by comparing it with VMware NSX. Both are excellent technologies in their own areas: VMware NSX mainly focuses on the virtual side, while Cisco ACI focuses on both the virtual and physical sides of the data center.

In this tutorial series, I will be covering the following topics:

  1. Basics of Cisco ACI terminology.
  2. Configuration of stretched L2_Out in Cisco ACI.
  3. Configuration of L3_Out in Cisco ACI.
  4. NSX communication with Cisco ACI.

Here is the list of resources I referred to while learning ACI:

  1. INE & Pluralsight Cisco ACI training series.
  2. Blog series from Adam Raffe: https://adamraffe.com/learning-aci/

Let me start with the basics of the Cisco ACI architecture. In the lab I created a setup similar to the one shown below: 2 baby spines and 4 leaf switches (2 Ethernet switches and 2 fabric switches). In the lab, 2 servers are connected to the Ethernet switches, and the APIC controllers are connected to the fabric switches.

APIC

Basics of ACI terminology:

Endpoint Group (EPG): a container of physical servers, VMs, or any endpoints that have a common policy requirement.

Here are some facts about EPGs:

  • An endpoint can be anything connected to a leaf switch.
  • By default, EPGs can't communicate with each other.
  • Within an EPG, servers can communicate.

Application Profile: a container of EPGs and the contracts required for those EPGs to communicate.

Contracts and Filters: a contract is a collection of one or more filters and defines how EPGs communicate with each other.
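To make this concrete, here is a hedged REST sketch of a filter matching HTTPS and a contract whose subject uses it; the tenant and object names are illustrative, not from my lab:

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# A filter matching TCP/443 and a contract whose subject uses that filter.
# One EPG would then provide the contract (fvRsProv) and another consume it
# (fvRsCons) for traffic to pass between them.
config = """
<fvTenant name="Demo-Tenant">
  <vzFilter name="allow-https">
    <vzEntry name="https" etherT="ip" prot="tcp"
             dFromPort="443" dToPort="443"/>
  </vzFilter>
  <vzBrCP name="web-contract">
    <vzSubj name="https-subject">
      <vzRsSubjFiltAtt tnVzFilterName="allow-https"/>
    </vzSubj>
  </vzBrCP>
</fvTenant>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)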

Private Networks/VRF

  • In ACI, a private network – sometimes known as a context or VRF – is used to define a Layer 3 forwarding domain within the fabric.

Bridge Domain

A bridge domain is simply a Layer 2 forwarding construct within the fabric, used to define a flood domain. You’re probably now thinking “that’s just like a VLAN” – and you’d be right, except that bridge domains are not subject to many of the same limitations as VLANs, such as the 4096-segment limit.

Subnets:

When you define a subnet under a BD, you are creating an anycast gateway – that is, a gateway address for a subnet that potentially exists on every leaf node (if required). In the traditional world, think of this as an SVI interface that can exist on more than one node with the same address.

You’ll also notice that we can control the scope of the subnet – private, public or shared. These are used as follows:

  • Private to VRF: a private subnet is one that is not advertised to any external entity via an L3 outside and will be constrained to the fabric.
  • Advertised externally: A public subnet is flagged to be advertised to external entities via an L3 outside (e.g. via OSPF or BGP).
  • Shared between VRFs: If a subnet is flagged as shared, it will be eligible for advertisement to other tenants or contexts within the fabric. This is analogous to VRF route leaking in traditional networks.

subnet
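These three scopes map to the scope attribute of the subnet object in the APIC REST API. A hedged sketch, with made-up tenant, bridge domain, and subnet values:

import requests

APIC = "https://apic.lab"  # assumption: lab APIC address
s = requests.Session()
s.verify = False  # lab APIC with a self-signed certificate
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# One anycast gateway per scope; all names and addresses are illustrative.
config = """
<fvTenant name="Demo-Tenant">
  <fvBD name="BD-Demo">
    <fvSubnet ip="10.1.10.1/24" scope="private"/>
    <fvSubnet ip="10.1.20.1/24" scope="public"/>
    <fvSubnet ip="10.1.30.1/24" scope="shared"/>
  </fvBD>
</fvTenant>
"""
resp = s.post(f"{APIC}/api/mo/uni.xml", data=config)
print(resp.status_code, resp.text)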

An Attachable Access Entity Profile (AAEP) is a kind of bridge between the tenant constructs and the switch profiles/switch interface profiles. The diagram below is very important for understanding the entire ACI architecture.

INE-AAEP-P1

In this blog post, I covered the basics of ACI technology. In the next post I will configure an L3_Out within ACI.


NSX Packet Flow 5: VM-to-physical-host communication where the source VM and edge are on different ESX hosts

Assumption: the controller has populated the ARP/MAC tables on all the ESX hosts.

The Web2 VM, running on ESXi05 on the Web logical switch (VNI 5001), wants to communicate with an outside student desktop (172.20.10.80), and the edge is on a different ESX host (ESXi04). How does this packet flow work? The path is marked in green and blue.

InkedNSX Lab 7 ESG_LI2.jpg

  • The Web2 VM (10.1.10.12) is running on ESXi05. It creates an IP packet with the following IP/MAC headers: source IP 10.1.10.12, destination IP 172.20.10.80. The packet goes to the default gateway of the web tier (the DLR interface) because the destination is not in the same subnet.
  • The web-tier DLR interface receives the packet, sees that the destination IP address belongs to a physical host, and routes the packet to the transit-network DLR interface.
  • The transit-network interface receives the packet but forwards it to the ESXi05 vTEP, because the edge is connected to a different ESX host.
  • The ESXi04 vTEP receives the packet, decapsulates it, and forwards it to the transit-network switch. The transit switch passes the packet to the internal interface of the ESG.
  • The Edge Services Gateway receives the packet via its internal interface, looks up its own routing table, and sends the packet via its uplink interface to the physical router. The physical router in turn looks up its own routing table and forwards the packet to the student desktop.

Lesson learnt: the packet goes to the vTEP when the source VM and the edge are running on different ESX hosts. A toy sketch of this decision follows below.
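To make the rule explicit, here is a toy Python sketch (purely illustrative, not NSX code) of the egress decision from packet flows 4 and 5: the transit frame heading for the ESG is VXLAN-encapsulated via the vTEP only when the source VM and the ESG live on different ESX hosts.

# Toy model of the lesson from packet flows 4 and 5; not NSX code.

def needs_vtep(source_vm_host: str, esg_host: str) -> bool:
    """True when the transit-network frame must be VXLAN-encapsulated."""
    return source_vm_host != esg_host

# Flow 5: Web2 on ESXi05, ESG on ESXi04 -> traverses both hosts' vTEPs.
assert needs_vtep("ESXi05", "ESXi04")
# Flow 4: Web1 and the ESG both on ESXi04 -> stays inside the host.
assert not needs_vtep("ESXi04", "ESXi04")
print("decision logic matches both packet walks")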

 


NSX Packet Flow 4: VM-to-physical-host communication where the source VM and edge are on the same ESX host

Assumption: the controller has populated the ARP/MAC tables on all the ESX hosts.

The Web1 VM, running on ESXi04 on the Web logical switch (VNI 5001), wants to communicate with an outside student desktop (172.20.10.80), and the edge also runs on the same ESX host as the Web1 VM. How does this packet flow work? The path is marked in green.

InkedNSX Lab 7 ESG_LI

  • The Web1 VM (10.1.10.11) is running on ESXi04. It creates an IP packet with the following IP headers: source IP 10.1.10.11, destination IP 172.20.10.80. The packet goes to the default gateway of the web tier (the DLR interface) because the destination is not in the same subnet.
  • The web-tier DLR interface receives the packet, sees that the destination IP address belongs to a physical host, and routes the packet to the transit-network DLR interface.
  • The transit-network interface forwards the packet to the Edge Services Gateway.
  • The ESG receives the packet via its internal interface, looks up its own routing table, and sends the packet via its uplink interface to the physical router. The physical router in turn looks up its own routing table and forwards the packet to the student desktop.

Lesson learnt: the packet doesn't go to the vTEP when the source VM and the edge are running on the same ESX host.

 


NSX Packet Flow 3: VM-to-VM communication between different logical switches on different ESX hosts

Assumption: the controller has populated the ARP/MAC tables on all the ESX hosts.

The DB VM, running on ESXi04 on the DB logical switch (VNI 5003), wants to communicate with a Web VM running on a different ESX host (ESXi05) on the Web logical switch (VNI 5001). How does this packet flow work? The path is marked in green.

Topology

InkedNSX2 Lab 6 DLR_LI

  1. The DB VM (10.1.30.11) is running on ESXi04. It creates an IP packet with the following IP/MAC headers: source IP 10.1.30.11, destination IP 10.1.10.12. The packet goes to the default gateway of the DB tier (the DLR interface) because the destination is not in the same subnet, so the DLR's DB-tier interface receives it.
  2. The DB-tier DLR interface receives the packet, sees that the destination IP address is in the web-tier network, and routes the packet to the web-tier DLR interface.
  3. The web-tier interface sees that the destination is not running on the same ESX host, so it sends the packet to the vTEP for encapsulation.
  4. ESXi04's vTEP sends the packet to ESXi05's vTEP; ESXi05's vTEP decapsulates the packet and sends it to the web-tier interface, which forwards it to the Web VM.

Lesson learnt: when the source and destination are in different networks and on different ESX hosts, the packet goes through the DLR gateways and to the vTEP for encapsulation.