...

  • Alec Hothan I would prefer to see a bare-metal server with a dedicated NIC connected to the TOR(s), so that a SW traffic generator can run on that server; this cannot be achieved using VMs. The NIC should preferably be one we are familiar with and that is known to work, e.g. the Intel X710 family (XXV710 2x25G); those are the NICs most commonly used for SW traffic generators, and we prefer to avoid troubleshooting untested HW profiles. We need 2 physical interfaces to the TOR(s). Please consider this, as it is the necessary condition for running traffic-gen tools like NFVbench on that RI (see the NIC-check sketch below this thread). If you want fully automated testing of the data plane, we will also need to program the TOR to allow traffic to flow from the traffic generator to the compute nodes. We can discuss the TOR-side config offline with the installer team (this will also depend on how the deployer configures the OpenStack data plane).
    • Parker Berberian What kinds of CPU and RAM would be needed to run a traffic generator in software? I assume the specs listed below for the compute / controller machines would be ample.
    • Alec Hothan Yes, the same specs as a compute node would do.
    • Xavier Grall IMHO, we should separate the jump host and traffic generator functions: the jump host can run as a VM, while the traffic generator is located on a dedicated server (or "half" a server, i.e. a single NUMA socket/processor).
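
As a minimal illustration of the NIC requirement discussed above, here is a sketch (Python, assuming a Linux host with `lspci` available) that checks whether the server exposes two ports from the Intel X710 family. The PCI device IDs used are assumptions and should be verified against the actual hardware profile.

```python
import subprocess

# Assumed PCI device IDs for the Intel 700 series (verify against actual HW):
#   8086:1572  X710 10G SFP+
#   8086:158b  XXV710 25G
CANDIDATE_IDS = {"8086:1572", "8086:158b"}

def find_traffic_gen_ports():
    """Return PCI addresses of candidate traffic-generator ports."""
    out = subprocess.run(["lspci", "-Dnn"], capture_output=True,
                         text=True, check=True)
    ports = []
    for line in out.stdout.splitlines():
        if any(dev_id in line for dev_id in CANDIDATE_IDS):
            ports.append(line.split()[0])  # PCI address, e.g. 0000:18:00.0
    return ports

if __name__ == "__main__":
    ports = find_traffic_gen_ports()
    # NFVbench needs two physical interfaces cabled to the TOR(s).
    print(f"Found {len(ports)} candidate port(s): {ports}")
    if len(ports) < 2:
        print("WARNING: fewer than 2 suitable ports; traffic gen not possible")
```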


For lab release 1, can we agree on a homogeneous set of hardware (i.e. no difference between the B / C / N flavors in the compute nodes)?

...

  • Alec Hothan This is really a question for the installer team, as it depends on how the OpenStack networks are mapped onto the TORs and whether they support HA. Most lab settings in OPNFV used a single TOR, which is the simplest to set up; however, production platforms almost never deploy with a single TOR, because most customers want link redundancy, which requires dual TORs along with LACP configuration on both sides of the wire (this is where the installer has to program LACP on each compute node, for example; see the bonding sketch below this thread). Note that this applies to the data plane, storage plane and control plane links, which are also almost never deployed without link redundancy. This installation mode has a drastic impact on port requirements on the servers, as it basically doubles the number of ports and the amount of wiring, and makes the installer somewhat more complex. In a nutshell, the final wiring and how the OpenStack networks are mapped for HA is one of the major differentiators between free open-source installers and commercial installers. I'm not sure which team is in charge of installing OpenStack for CNTT, and I don't know what kind of installer is being planned. If the installer only supports a single TOR and no HA, that will make the HW setup a lot simpler. However, it will also make the CNTT RI less realistic, which might be OK if that was never the intent or goal, or not a priority.
    • Lincoln Lavoie I understand the installer would need to support HA / dual TOR, but from a lab's perspective, what is the requirement? This discussion concerns the real hardware available within the lab.
  • Alec Hothan The config of the TOR(s) is also an important element of the installation, as it depends on the type of encapsulation used by Neutron (tenant VLAN and VXLAN are the two main options, outside of the simpler flat network) and on whether multiple networks share the same physical interfaces; again, this will depend on the installer used for the CNTT RI (see the ML2 sketch below this thread). Most lab deployments use simple VLANs to partition networks, while production networks use more sophisticated network designs.
    • Lincoln Lavoie Again, is the expectation that the lab supports one of the encapsulation methods, all of them, etc.?
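
To make the dual-TOR point above concrete, here is a minimal sketch of what the installer would have to program on each compute node: an 802.3ad (LACP) bond across the two TOR-facing links. It simply emits a netplan-style config; the interface names (eno1/eno2) and bond name are hypothetical, and the TOR side needs a matching LACP port-channel.

```python
# Minimal sketch: generate a netplan config for an LACP bond across two
# TOR-facing links. Interface and bond names are placeholders; both TORs
# must be configured with a matching LACP port-channel.
BOND_CONFIG = """\
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad        # LACP; requires matching config on the TORs
        lacp-rate: fast
        transmit-hash-policy: layer3+4
"""

if __name__ == "__main__":
    # An installer would write this to /etc/netplan/ and apply it per node.
    print(BOND_CONFIG)
```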
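
Similarly, the encapsulation choice discussed above ultimately surfaces in the Neutron ML2 plugin configuration. The sketch below shows the two main options (tenant VLAN vs. VXLAN) side by side; the physnet name and ID ranges are placeholders, and the chosen VLAN range would also have to be trunked on the TOR ports facing the compute nodes.

```python
# Sketch of the Neutron ML2 settings driven by the encapsulation choice.
# Physnet name and ranges are placeholders, not a recommended design.
ML2_CONF = """\
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan        # or 'vxlan' for overlay tenant networks

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
vni_ranges = 1000:2000
"""

if __name__ == "__main__":
    print(ML2_CONF)
```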



Original Email to opnfv-tech-discuss (for history purposes)

...