
Tuesday, October 21, 2014

OpenStack Juno Part 5 - Configuring Neutron on the Network Node

Installing the Packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ipset  -y

Configuring the Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest


openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


#Add verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
#Comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.


openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1454" >> /etc/neutron/dnsmasq-neutron.conf
chown neutron:neutron /etc/neutron/dnsmasq-neutron.conf
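#DHCP option 26 sets the interface MTU handed out to instances; 1454 leaves room
#for GRE encapsulation overhead on a standard 1500-byte physical network. To
#confirm the value from inside a booted instance (interface name may differ):
# ip link show eth0 | grep -o 'mtu [0-9]*'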

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password mar4neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret mar4meta

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

#Perform the next two steps on the controller node.
#On the controller node, configure Compute to use the metadata service:
#Replace METADATA_SECRET with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret mar4meta

On the controller node, restart the Compute API service:
systemctl restart openstack-nova-api.service

# To configure the Modular Layer 2 (ML2) plug-in

# Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node.
#This setup uses 10.0.0.212 as the dedicated tunneling IP of the network node.
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.212
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service

#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth1
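#A quick check that the bridge and port exist (should print eth1):
ovs-vsctl list-ports br-ex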


#Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
#To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off
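#To make this persistent across reboots, one option (a sketch assuming eth1 and
#the standard network scripts) is an ETHTOOL_OPTS line in the interface config:
# echo 'ETHTOOL_OPTS="-K eth1 gro off"' >> /etc/sysconfig/network-scripts/ifcfg-eth1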



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
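#Reload systemd so it picks up the modified unit file:
systemctl daemon-reload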


Starting the Services

systemctl enable neutron-openvswitch-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
systemctl enable neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service
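
Once everything is up, you can verify the agents from the controller (assuming the admin credentials file from Part 4); each agent should report alive as :-) :

source /root/admin-openrc.sh
neutron agent-list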

Monday, October 20, 2014

OpenStack Juno + Docker error "Docker daemon is not running or is not reachable"

I was getting the following error while integrating Docker with OpenStack Juno.

"2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)"

I tried changing the permissions on docker.sock, but that didn't help. Upgrading Docker to version 1.2 fixed the issue. The Docker version that ships with CentOS is a bit old; newer RPMs for CentOS 7 are available from the CentOS community build system (cbs.centos.org).

Download the following RPMs:

wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm

Install the RPMs

From the same directory where you downloaded them:
yum install docker-*.rpm


Error
====
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup
2014-10-20 14:24:22.876 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.901 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.906 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.919 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.954 2995 ERROR nova.openstack.common.threadgroup [-] Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)

OpenStack Juno Part 4 - Neutron on the Controller

Create the MySQL Database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'10.0.0.211' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'10.0.0.212' IDENTIFIED BY 'mar4neutron';
FLUSH PRIVILEGES;
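
A quick way to confirm the grants took effect is to log in with the neutron credentials (using the password set above):

mysql -u neutron -pmar4neutron -e 'SHOW DATABASES;' | grep neutron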

Create Keystone Endpoints and Users
source /root/admin-openrc.sh
keystone user-create --name neutron --pass mar4neutron
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne

Installing the packages 
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

Configuring the Packages
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:mar4neutron@controller/neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password mar4nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_region_name regionOne

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populating the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

Starting the Services.
systemctl restart openstack-nova-api.service
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-conductor.service
systemctl enable neutron-server.service
systemctl start neutron-server.service
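
To verify that neutron-server is answering API requests, list the loaded extensions with the admin credentials sourced:

source /root/admin-openrc.sh
neutron ext-list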

Checking the database
MariaDB [neutron]> show tables;
+-------------------------------------+
| Tables_in_neutron                   |
+-------------------------------------+
| agents                              |
| alembic_version                     |
| allowedaddresspairs                 |
| arista_provisioned_nets             |
| arista_provisioned_tenants          |
| arista_provisioned_vms              |
| brocadenetworks                     |
| brocadeports                        |
| cisco_credentials                   |
| cisco_csr_identifier_map            |
| cisco_hosting_devices               |
| cisco_ml2_apic_contracts            |
| cisco_ml2_apic_host_links           |
| cisco_ml2_apic_names                |
| cisco_ml2_nexusport_bindings        |
| cisco_n1kv_multi_segments           |
| cisco_n1kv_network_bindings         |
| cisco_n1kv_port_bindings            |
| cisco_n1kv_profile_bindings         |
| cisco_n1kv_trunk_segments           |
| cisco_n1kv_vlan_allocations         |
| cisco_n1kv_vmnetworks               |
| cisco_n1kv_vxlan_allocations        |
| cisco_network_profiles              |
| cisco_policy_profiles               |
| cisco_port_mappings                 |
| cisco_provider_networks             |
| cisco_qos_policies                  |
| cisco_router_mappings               |
| consistencyhashes                   |
| csnat_l3_agent_bindings             |
| dnsnameservers                      |
| dvr_host_macs                       |
| embrane_pool_port                   |
| externalnetworks                    |
| extradhcpopts                       |
| firewall_policies                   |
| firewall_rules                      |
| firewalls                           |
| floatingips                         |
| ha_router_agent_port_bindings       |
| ha_router_networks                  |
| ha_router_vrid_allocations          |
| healthmonitors                      |
| hyperv_network_bindings             |
| hyperv_vlan_allocations             |
| ikepolicies                         |
| ipallocationpools                   |
| ipallocations                       |
| ipavailabilityranges                |
| ipsec_site_connections              |
| ipsecpeercidrs                      |
| ipsecpolicies                       |
| lsn                                 |
| lsn_port                            |
| maclearningstates                   |
| members                             |
| meteringlabelrules                  |
| meteringlabels                      |
| ml2_brocadenetworks                 |
| ml2_brocadeports                    |
| ml2_dvr_port_bindings               |
| ml2_flat_allocations                |
| ml2_gre_allocations                 |
| ml2_gre_endpoints                   |
| ml2_network_segments                |
| ml2_port_bindings                   |
| ml2_vlan_allocations                |
| ml2_vxlan_allocations               |
| ml2_vxlan_endpoints                 |
| mlnx_network_bindings               |
| multi_provider_networks             |
| network_bindings                    |
| network_states                      |
| networkconnections                  |
| networkdhcpagentbindings            |
| networkflavors                      |
| networkgatewaydevicereferences      |
| networkgatewaydevices               |
| networkgateways                     |
| networkqueuemappings                |
| networks                            |
| networksecuritybindings             |
| neutron_nsx_network_mappings        |
| neutron_nsx_port_mappings           |
| neutron_nsx_router_mappings         |
| neutron_nsx_security_group_mappings |
| nexthops                            |
| nuage_net_partition_router_mapping  |
| nuage_net_partitions                |
| nuage_provider_net_bindings         |
| nuage_subnet_l2dom_mapping          |
| ofcfiltermappings                   |
| ofcnetworkmappings                  |
| ofcportmappings                     |
| ofcroutermappings                   |
| ofctenantmappings                   |
| ovs_network_bindings                |
| ovs_tunnel_allocations              |
| ovs_tunnel_endpoints                |
| ovs_vlan_allocations                |
| packetfilters                       |
| poolloadbalanceragentbindings       |
| poolmonitorassociations             |
| pools                               |
| poolstatisticss                     |
| port_profile                        |
| portbindingports                    |
| portinfos                           |
| portqueuemappings                   |
| ports                               |
| portsecuritybindings                |
| providerresourceassociations        |
| qosqueues                           |
| quotas                              |
| router_extra_attributes             |
| routerflavors                       |
| routerl3agentbindings               |
| routerports                         |
| routerproviders                     |
| routerroutes                        |
| routerrules                         |
| routers                             |
| routerservicetypebindings           |
| securitygroupportbindings           |
| securitygrouprules                  |
| securitygroups                      |
| segmentation_id_allocation          |
| servicerouterbindings               |
| sessionpersistences                 |
| subnetroutes                        |
| subnets                             |
| tunnelkeylasts                      |
| tunnelkeys                          |
| tz_network_bindings                 |
| vcns_edge_monitor_bindings          |
| vcns_edge_pool_bindings             |
| vcns_edge_vip_bindings              |
| vcns_firewall_rule_bindings         |
| vcns_router_bindings                |
| vips                                |
| vpnservices                         |
+-------------------------------------+
142 rows in set (0.00 sec)

Sunday, October 19, 2014

Failed to issue method call: Unit iptables.service failed to load in CentOS 7

In RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is more suited for workstations than for server environments.

It is possible to go back to a more classic iptables setup. First, stop and mask the firewalld service:

systemctl stop firewalld
systemctl mask firewalld
Then, install the iptables-services package:

yum install iptables-services
Enable the service at boot-time:

systemctl enable iptables
Managing the service

systemctl [stop|start|restart] iptables
systemctl doesn't handle the save action the way the old service command did, so use the init script directly:

/usr/libexec/iptables/iptables.init save
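
Alternatively, you can dump the running rule set directly into the file the iptables service loads at boot:

iptables-save > /etc/sysconfig/iptables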

Friday, October 17, 2014

Logstash to Parse Local Files and Apache/Nginx Logs

Filters in Logstash
Filters are an in-line processing mechanism that provides the flexibility to slice and dice your data to fit your needs. Let's see one in action, namely the grok filter.

input { stdin { } }

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Run Logstash with this configuration:

bin/logstash -f logstash-filter.conf
Now paste this line into the terminal (so it will be processed by the stdin input):

127.0.0.1 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891 "http://cadenza/xampp/navi.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0"
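
If the grok filter matches, the rubydebug output on stdout shows the line split into named fields, roughly like this (abridged; exact fields depend on your pattern version):

{
      "clientip" => "127.0.0.1",
     "timestamp" => "11/Dec/2013:00:01:45 -0800",
          "verb" => "GET",
       "request" => "/xampp/status.php",
      "response" => "200",
         "bytes" => "3891",
         ...
}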


Run Logstash on a local file by configuring the input section. Below we parse an Apache access log from the local server.

input {
  file {
    path => "/Users/kurt/logs/access_log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}

Logstash configuration for parsing Nginx logs. Nginx's default access log uses the same combined format as Apache, so the %{COMBINEDAPACHELOG} grok pattern applies unchanged; an adapted variant for a typical Nginx log path is shown after the block below.

input {
  file {
    path => "/Users/kurt/logs/access_log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}
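
If your Nginx logs live in the standard location, a minimal adaptation looks like this (a sketch assuming the default combined log format and the typical /var/log/nginx/access.log path; the output section stays the same as above):

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => beginning
  }
}

filter {
  mutate { replace => { "type" => "nginx_access" } }
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}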

Log Monitoring with Kibana + Logstash + Elasticsearch



Centralized logging using Logstash and Elasticsearch can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place.


Installing Java 

yum install java-1.7.0-openjdk-*

Install Elasticsearch

yum install https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.noarch.rpm

Elasticsearch is now installed. Let's edit the configuration: /etc/elasticsearch/elasticsearch.yml

Add the following line somewhere in the file, to disable dynamic scripts:

script.disable_dynamic: true

You will also want to restrict outside access to your Elasticsearch instance, so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:

network.host: localhost

Then disable multicast by finding the discovery.zen.ping.multicast.enabled item and uncommenting it so it looks like this:

discovery.zen.ping.multicast.enabled: false


Now start Elasticsearch:

sudo service elasticsearch restart
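
Elasticsearch should now be listening on the loopback interface only; a quick sanity check:

curl http://localhost:9200

The response is a small JSON document with the node name and version.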


Install Nginx

yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install nginx -y

Download the sample Nginx configuration from Kibana's github repository to your home directory:

cd ~; curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

Open the sample configuration file for editing:

vi nginx.conf

Find and change the values of the server_name to your FQDN (or localhost if you aren't using a domain name) and root to where we installed Kibana, so they look like the following entries:

server_name FQDN;
root  /usr/share/nginx/kibana3;

Save and exit. Now copy it over your Nginx default server block with the following command:

sudo cp ~/nginx.conf /etc/nginx/conf.d/default.conf


Installing Kibana to parse the logs
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
tar zxvf kibana-3.1.1.tar.gz


Open the Kibana configuration file kibana-3.1.1/config.js and find the line that specifies the Elasticsearch server URL, then replace the port number (9200 by default) with 80:

   elasticsearch: "http://"+window.location.hostname+":80",

mv kibana-3.1.1 /usr/share/nginx/kibana3

Start Nginx:

service nginx start

Install httpd-tools, which provides the htpasswd utility:
sudo yum install httpd-tools-2.2.15
Then generate a login that will be used in Kibana to save and share dashboards (substitute your own username):
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user

Install Logstash

yum install https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.2-1_2c0f5a1.noarch.rpm -y

Creating Certificates

cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


cat << EOF >> /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

cat << EOF >> /etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF


cat << EOF >> /etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF
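
Restart Logstash so it picks up the three new configuration files:

sudo service logstash restart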




On Logstash Server

Copy the SSL certificate to the client server (substitute your own login and the server's private IP):

scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp


Install Logstash Forwarder Package

yum install -y http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm

Next, you will want to install the Logstash Forwarder init script, so it starts on bootup. We will use the init script provided by logstashbook.com:

cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
sudo chmod +x logstash-forwarder

The init script depends on a file called /etc/sysconfig/logstash-forwarder. A sample file is available to download:

sudo curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

sudo vi /etc/sysconfig/logstash-forwarder
And modify the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Save and quit.

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash Forwarder
On the client server, create the Logstash Forwarder configuration file, which is in JSON format:

cat << EOF > /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.255.1:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}

EOF


Note that this is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000.

Now we will want to add the Logstash Forwarder service with chkconfig:

sudo chkconfig --add logstash-forwarder

Now start Logstash Forwarder to put our changes into place:

sudo service logstash-forwarder start


Now browse to the Kibana server's IP address to bring up the dashboard.

Thursday, October 16, 2014

Poodle-SSLv3 Vulnerability

A vulnerability in the SSLv3 encryption protocol was disclosed. This vulnerability, known as POODLE (Padding Oracle On Downgraded Legacy Encryption), allows an attacker to read information encrypted with this version of the protocol in plain text using a man-in-the-middle attack.

Although SSLv3 is an older version of the protocol and is largely obsolete, many pieces of software still fall back on SSLv3 if better encryption options are not available. More importantly, it is possible for an attacker to force SSLv3 connections if it is an available alternative for both participants attempting a connection.

How to test for SSL POODLE vulnerability?
$ openssl s_client -connect google.com:443 -ssl3
If the handshake fails, the server does not support SSLv3 and is not affected by this vulnerability. Otherwise, SSLv3 support should be disabled.
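
To sweep several hosts at once, a small shell loop works (a sketch; substitute your own host list, relying on openssl s_client exiting non-zero when the SSLv3 handshake is refused):

for h in example.com mail.example.com; do
  if echo | openssl s_client -connect "$h:443" -ssl3 >/dev/null 2>&1; then
    echo "$h: SSLv3 accepted - vulnerable"
  else
    echo "$h: SSLv3 rejected - OK"
  fi
done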


The POODLE vulnerability exists because the SSLv3 protocol does not adequately check the padding bytes that are sent with encrypted messages.

Since these cannot be verified by the receiving party, an attacker can replace these and pass them on to the intended destination. When done in a specific way, the modified payload will potentially be accepted by the recipient without complaint.

The POODLE vulnerability is not an implementation problem but an inherent issue with the entire protocol; there is no workaround, and the only reliable solution is to not use SSLv3.

In the Nginx configuration, just after the "ssl on;" line, add the following to allow only TLS protocols:

ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

Apache Web Server

Inside /etc/httpd/conf.d/ssl.conf or httpd.conf you can find the SSLProtocol directive. If it is not present, create it. Modify it to explicitly remove support for SSLv3:

SSLProtocol all -SSLv3 -SSLv2


HAProxy
To disable SSLv3 in an HAProxy load balancer, you will need to open the haproxy.cfg file.

sudo nano /etc/haproxy/haproxy.cfg
frontend name
    bind public_ip:443 ssl crt /path/to/certs no-sslv3

Postfix

In the Postfix configuration, /etc/postfix/main.cf, add:
smtpd_tls_mandatory_protocols=!SSLv2, !SSLv3


In Dovecot

sudo nano /etc/dovecot/conf.d/10-ssl.conf
ssl_protocols = !SSLv3 !SSLv2

Tomcat

Edit $TOMCAT_HOME/conf/server.xml.

Tomcat 5 and 6:

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="false" sslProtocols="TLSv1,TLSv1.1,TLSv1.2" />
Tomcat 7 and later:

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="false" sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" />