Arno Foreman Install on a Single CentOS 7 laptop
You may hit several issues getting through this, and some require workarounds. The default method of deployment is "bare metal", meaning you are going to provision other physical servers. You need to pass "-virtual" as a parameter to deploy.sh in order to provision VMs rather than 5 other bare metal servers. Please follow these steps:
Minimum requirements:
* 250 GB storage
* 18 GB RAM (10 GB for non-HA)
* 1 NIC configured with internet access

Deployment commands:
* If in China: ./deploy.sh -virtual -ping_site www.baidu.com -static_ip_range <your_range>
* Otherwise: ./deploy.sh -virtual -static_ip_range <your_range>
* For non-HA: ./deploy.sh -virtual -static_ip_range <your_range> -base_config <full path to pwd>/opnfv_ksgen_settings_no_HA.yml
Here <your_range> is a contiguous block of IP addresses on your local network that you can use, e.g.: 192.168.1.101,192.168.1.120
The IP range can be determined by looking at the "ifconfig" output and starting at the next unused ".x01" block (.101, .201, ...) in the 192.168 range. For example, in an environment on Wi-Fi at home, this range ended up being 192.168.1.201,192.168.1.220.
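To make the arithmetic concrete, here is a small Python sketch of picking such a block (the interface address 192.168.1.54/24 and the choice of .201-.220 are illustrative assumptions, not values from the script):

```python
import ipaddress

# Hypothetical example: suppose "ifconfig" reports 192.168.1.54/24 on your NIC.
iface = ipaddress.ip_interface("192.168.1.54/24")
network = iface.network

# Pick a 20-address block high in the subnet, away from typical DHCP leases.
hosts = list(network.hosts())          # 192.168.1.1 .. 192.168.1.254
start, end = hosts[200], hosts[219]    # .201 .. .220
print(f"{start},{end}")                # -> 192.168.1.201,192.168.1.220
```

The printed string is exactly the format -static_ip_range expects.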
A successful deployment should end with a final completion line printed to the console.
APEX-19: Verify at start that default Gateway is in the same subnet as the given IP range
If you pick an IP range whose subnet is not the same subnet as the default gateway, the created VMs inexplicably cannot access the internet. The script should verify this at the beginning and report an error if a mismatch is discovered.
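The check that APEX-19 proposes could look something like this Python sketch (the function name and the sample gateway/range values are my own, not from the script):

```python
import ipaddress

def gateway_in_range_subnet(gateway: str, ip_range: str, prefix: int = 24) -> bool:
    """Return True if the default gateway is in the same subnet as the static IP range."""
    start, end = (ipaddress.ip_address(a) for a in ip_range.split(","))
    subnet = ipaddress.ip_network(f"{start}/{prefix}", strict=False)
    return ipaddress.ip_address(gateway) in subnet and end in subnet

# Hypothetical values: gateway on 192.168.1.0/24, range in the same subnet.
print(gateway_in_range_subnet("192.168.1.1", "192.168.1.201,192.168.1.220"))  # -> True
# Gateway on a different subnet: this is the broken configuration to reject.
print(gateway_in_range_subnet("10.0.0.1", "192.168.1.201,192.168.1.220"))     # -> False
```

Running this at the top of deploy.sh-style validation and aborting on False would catch the misconfiguration before any VMs are created.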
APEX-11 (Fixed on 09/04/2015): Some interfaces ignored by deploy.sh due to regexp bug
If the script unexpectedly prompts you in the console to select an interface, that prompt itself is the bug. The script can normally find the correct interface automatically (here it should have found "wlo1"), but it skips any interface with "lo" in its name in an attempt to skip "loopback" interfaces. The workaround is to change the regexp to look for "||lo" instead of "lo".
The entire change was a one-line edit at line 945 of deploy.sh.
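The diff itself isn't reproduced here, but the class of bug is easy to demonstrate in Python (the interface names are illustrative):

```python
import re

# The deploy.sh bug, sketched: filtering out anything *containing* "lo"
# also drops the wireless NIC "wlo1", not just the loopback interface.
ifaces = ["eth0", "wlo1", "lo"]

buggy = [i for i in ifaces if not re.search(r"lo", i)]     # substring match
fixed = [i for i in ifaces if not re.fullmatch(r"lo", i)]  # exact match only

print(buggy)  # -> ['eth0']            wlo1 wrongly skipped
print(fixed)  # -> ['eth0', 'wlo1']    only the loopback skipped
```

Anchoring the match to the whole interface name is the general shape of the fix, regardless of the exact regexp syntax used in the script.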
This bug was filed as https://jira.opnfv.org/browse/APEX-11. One concern is that assuming semantics of an interface based on its name is suspect at best.
Please note that APEX-11 has been fixed since September 4, 2015 for both VM and bare metal deployment.
Do Some Clean-up before Restarting
At this point, since the deployment is half done, you may have to do some cleanup before restarting. First try the "clean.sh" script. If that doesn't work:
- Go to "/var/opt/opnfv" (an older version of this script created the VMs in /tmp, which made rebooting the box awkward).
- In each of the VMs defined in that folder, do "vagrant destroy".
- Verify no VMs are still running with "vboxmanage list runningvms".
- Delete the entire "opnfv" directory.
Handling Firefox if the Attempt to Reach the Foreman URL Fails
The first attempt to reach the Foreman URL from Firefox will likely fail, unless Firefox wasn't running when you ran the deployment; if it was running, just exit and restart it. If this isn't your first complete run through the deployment, it might still fail because of an invalid certificate, which is installed as part of the deployment. Fix this by going into your Firefox preferences and deleting the cert with "opnfv" in the URL.
Internet Unreachable from VM / APEX-2 (Fixed): Default Vagrant Route Exists in Virtual Setups Post Deployment
Another issue that you may run into is that the internet is initially unreachable from any of these VMs. This is because they somehow get two default routes, one of which is invalid. This is evident if "route -n" inside the VM shows two default gateways, one of them 10.0.2.2 (the VirtualBox NAT gateway).
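As an illustration, here is a Python sketch that spots the duplicate default route in "route -n"-style output (the sample routing table below is fabricated to match the symptom, not a capture from a real run):

```python
# Fabricated "route -n" output showing the bad state: two default routes.
sample = """\
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    0      0        0 eth0
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth1
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
"""

# Default routes have destination 0.0.0.0; collect their gateways.
default_gws = [line.split()[1] for line in sample.splitlines()
               if line.startswith("0.0.0.0")]
print(default_gws)  # -> ['10.0.2.2', '192.168.1.1']  (two = the broken state)
```

If "10.0.2.2" appears among the default gateways, that is the invalid route to delete.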
The fix is to remove the "10.0.2.2" gateway, e.g. with "route del default gw 10.0.2.2" (or "ip route del default via 10.0.2.2").
This is logged at https://jira.opnfv.org/browse/APEX-2, which has since been fixed.
The Given Horizon URL is Invalid
The given Horizon URL is invalid: it is a "private" IP (on the VMs' internal network) when it should be an IP reachable from your LAN. This is logged at https://jira.opnfv.org/browse/APEX-12.
The workaround is to ssh to the controller ("vagrant ssh"), run "ifconfig", and find an interface with an IPv4 address beginning with 192.168.1.x (in my case, 192.168.1.202), then use that address in the Horizon URL instead.
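For illustration, selecting the LAN-reachable address from the interface addresses can be sketched in Python (the address list is a made-up example of what "ifconfig" might show on the controller):

```python
import ipaddress

# Hypothetical addresses from the controller's interfaces: loopback,
# the VirtualBox NAT address, and the LAN-reachable address we want.
addrs = ["127.0.0.1", "10.0.2.15", "192.168.1.202"]

lan = ipaddress.ip_network("192.168.1.0/24")
reachable = [a for a in addrs if ipaddress.ip_address(a) in lan]
print(reachable[0])  # -> 192.168.1.202  (use this host in the Horizon URL)
```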
Issues in OpenStack Verification
At this point, you can start the "OpenStack Verification" section of the install guide. You should be able to create the volume, image, and launch a few instances before you hit any issues.
No Instance Displays on the Console
One issue may be that when you try to view the console of either instance, nothing ever displays.
The workaround for this is a little gnarly. To log in to these instances from the shell:
- "vagrant ssh" to the controller.
- Source the "keystonerc_admin" file to set some auth-oriented environment variables.
- Do "nova list", just to remind yourself of the IP addresses of the instances (10.0.0.5 and 10.0.0.7, for example).
- Do "ip netns list", which returns a big uuid value representing the DHCP network namespace. You'll use that uuid in a couple of other commands.
Still in the controller, run commands inside that namespace with "ip netns exec $uuid <command>" (where $uuid is the uuid value from "ip netns list"); for example, log in to the first instance with something like "sudo ip netns exec $uuid ssh cirros@10.0.0.5" (the user name depends on your image).
Then create another shell from the main box, ssh to the controller, and log in to the second instance the same way (e.g. "sudo ip netns exec $uuid ssh cirros@10.0.0.7").
You can now do "ping 10.0.0.7" from the first instance and "ping 10.0.0.5" from the second. This completes the "OpenStack Verification" section.
You may want to go one step further and verify that the VM at 10.0.0.5 really is the one being pinged from the 10.0.0.7 VM. You can do this by running "tcpdump -i any icmp" on the VM you are trying to ping; when you ping from the other VM, the tcpdump output shows the pings arriving at that VM.
Alternatively, you can run tcpdump from the compute node, specifying the tap interface corresponding to each IP. This requires ssh'ing to the compute node and doing "ovs-vsctl show", which lists the two tap interfaces but does not show which IP corresponds to which. You can then do an "ifconfig" inside each instance to find out its port id and match it to a tap interface.
Then, still on the compute node, do "tcpdump -i <interfacename> icmp" for the first tap interface. Create another shell, ssh to the compute node, and do the same for the other tap interface name. Now, go back to the shell where you're logged into each instance, and do the "ping" to the other instance. You should now see output from one of the tcpdump calls, showing the pings traveling on that interface.
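As a shortcut for matching tap interfaces to instances: on Neutron/OVS deployments of this vintage, the tap device name is conventionally "tap" plus the first 11 characters of the instance's Neutron port UUID (verify this against your own deployment; the UUID below is made up for illustration):

```python
# Sketch of the common Neutron/OVS naming convention: the tap device is
# "tap" + the first 11 characters of the Neutron port UUID, so the device
# name fits the kernel's interface-name length limit.
def tap_name(port_id: str) -> str:
    return "tap" + port_id[:11]

# Made-up port UUID, e.g. as shown by "neutron port-list" on the controller.
print(tap_name("3fd3b0e7-9c1a-4a2b-8f00-1234567890ab"))  # -> tap3fd3b0e7-9c
```

Comparing the predicted names against the "ovs-vsctl show" output tells you which tap interface belongs to which instance without any ping experiments.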
Since the VMs are no longer in /tmp, the setup should survive a reboot, but you might have to do "vagrant up" in the controller and compute VMs, and then perhaps do something similar in the Horizon GUI for the two instances. If you bring this up on a different network, you might have trouble with the available IP range; this could be mitigated by first creating a VM for the jumphost and starting the deployment process over.