
  • Ingress issue from spawned instance to compute host
    • I've used packstack for an all-in-one install on a VM (VMware Fusion), Grizzly, no Quantum networking.

      My problem is that a spawned instance cannot communicate with the OpenStack Compute API or any other TCP/HTTP service on the single controller/compute node (pinging the spawned instance and ssh'ing to it work fine). This matters because I'm installing Cloud Foundry on top of OpenStack (http://docs.cloudfoundry.com/docs/running/deploying-cf/openstack/), and there is a step where a bosh instance (similar to Puppet) is spawned and must coordinate the install of Cloud Foundry services across a number of additional spawned instances, which requires use of the Compute API (those services need to reach the controller/compute node as well).

      On the spawned instance, a process attempts to send an HTTP request to the Compute API running on 192.168.1.150 but gets EHOSTUNREACH (no route to host), surfaced as Error 100: Unable to connect to the OpenStack Compute API.

      My controller/compute node (192.168.1.150) runs Fedora 18, uses a static IP, and has named configured for DNS.

      I can log on to the spawned instance and verify that:

      -- curl to the controller/compute node fails:

      vcap@bm-76db0be2-7803-47da-9d8a-1848b5e024e0:~$ curl http://192.168.1.150:35357/v2.0
      curl: (7) couldn't connect to host

      -- nmap shows only ports 22 and 53 open on the controller/compute host:

      vcap@bm-76db0be2-7803-47da-9d8a-1848b5e024e0:~$ nmap -PN 192.168.1.150

      Starting Nmap 5.00 ( http://nmap.org ) at 2013-09-21 08:42 UTC
      Interesting ports on 192.168.1.150:
      Not shown: 998 filtered ports
      PORT STATE SERVICE
      22/tcp open ssh
      53/tcp open domain

      Nmap done: 1 IP address (1 host up) scanned in 5.33 seconds

      -- the spawned instance can ping 192.168.1.150 fine.

      -- the spawned instance can successfully connect to HTTP services running externally (google.com or another machine on my network); e.g. curl google.com and curl 192.168.1.3:9080 both get a response.

      I was able to isolate the problem to iptables on 192.168.1.150 blocking the traffic: if I add a rule allowing all TCP traffic to be accepted by 192.168.1.150, things temporarily work (nmap works, curl works, my install proceeds a bit further). Here's the command I run on the controller/compute node (host0/192.168.1.150):

      [root@host0 cf-release(keystone_admin)]# iptables -A nova-network-INPUT -i br100 -p tcp -m tcp -j ACCEPT
      [root@host0 cf-release(keystone_admin)]# iptables --list-rules | grep nova-network-INPUT
      -A nova-network-INPUT -i br100 -p udp -m udp --dport 67 -j ACCEPT
      -A nova-network-INPUT -i br100 -p tcp -m tcp --dport 67 -j ACCEPT
      -A nova-network-INPUT -i br100 -p udp -m udp --dport 53 -j ACCEPT
      -A nova-network-INPUT -i br100 -p tcp -m tcp --dport 53 -j ACCEPT
      -A nova-network-INPUT -i br100 -p tcp -m tcp -j ACCEPT

      And the resulting success using curl on the spawned instance:

      vcap@bm-76db0be2-7803-47da-9d8a-1848b5e024e0:/var/vcap/store/director/tasks/1$ curl http://192.168.1.150:35357/v2.0
      {"version": {"status": "stable", "updated": "2013-03-06T00:00:00Z",...}]}}

      The problem is that this rule disappears whenever an instance is spawned and, I believe, whenever nova networking restarts, so I keep running into the issue during the install process and need a permanent fix (saving the rules with iptables save doesn't make a difference). I'm basically setting up a POC environment at home, so I'd accept non-secure fixes.
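
      One workaround I'm considering, sketched below under the assumption that nova networking only rebuilds its own nova-* chains and leaves rules it didn't add in the built-in INPUT chain alone:

      # Insert the ACCEPT at the top of the built-in INPUT chain, ahead of
      # the jump into nova-network-INPUT (which nova keeps rebuilding)
      [root@host0 ~]# iptables -I INPUT 1 -i br100 -p tcp -m tcp -j ACCEPT
      # Persist across reboots (assumes the iptables service is enabled)
      [root@host0 ~]# service iptables save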

      Note that nova security groups and rules don't seem to have any impact on this: I'm not seeing any ingress-specific rules (it looks like that's not supported in nova networking), and adding rules for specific ports (tcp/80, for example) had no effect.
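
      For reference, this is the form of rule I tried (the port is just an example):

      [root@host0 cf-release(keystone_admin)]# nova secgroup-add-rule default tcp 80 80 0.0.0.0/0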

      I've also tried the Noop firewall driver, as you can see in the nova.conf linked below, and setting net.ipv4.conf.all.rp_filter = 0 and net.ipv4.conf.default.rp_filter = 0.
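
      Concretely, here's roughly what I set (driver class name as in Grizzly; sysctl values applied with sysctl -p):

      # /etc/nova/nova.conf
      firewall_driver=nova.virt.firewall.NoopFirewallDriver

      # /etc/sysctl.conf
      net.ipv4.conf.all.rp_filter = 0
      net.ipv4.conf.default.rp_filter = 0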

      Any guidance would be much appreciated! In general, I can see why spawned instances shouldn't be allowed to communicate with the compute/controller node(s) by default, but it's not clear to me why it isn't easier to configure that access when desired. Maybe I'm missing something?

      Here's my config: http://paste.openstack.org/show/47722/