Sunday, October 19, 2014

Failed to issue method call: Unit iptables.service failed to load in CentOS 7

In RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is more suited for workstations than for server environments.

It is possible to go back to a more classic iptables setup. First, stop the firewalld service and mask it so that nothing can start it again behind your back:

systemctl stop firewalld
systemctl mask firewalld
Then, install the iptables-services package:

yum install iptables-services
Enable the service at boot-time:

systemctl enable iptables
Managing the service

systemctl [stop|start|restart] iptables
systemctl doesn't provide the save action you used to get from the old service command (service iptables save), so call the init script directly:

/usr/libexec/iptables/iptables.init save
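
For example, to open TCP port 80 and persist it across reboots (a minimal sketch; the save action writes the running ruleset to /etc/sysconfig/iptables):

# insert an allow rule, then save the running rules to /etc/sysconfig/iptables
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
/usr/libexec/iptables/iptables.init save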

Friday, October 17, 2014

Logstash to Parse Local Files and Apache/Nginx Logs

Filters in Logstash
Filters are an in-line processing mechanism that provides the flexibility to slice and dice your data to fit your needs. Let's see one in action, namely the grok filter.

input { stdin { } }

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Run Logstash with this configuration:

bin/logstash -f logstash-filter.conf
Now paste this line into the terminal (so it will be processed by the stdin input):

127.0.0.1 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891 "http://cadenza/xampp/navi.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0"
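Logstash should print something like the following to your terminal via the rubydebug codec (trimmed here; exact formatting may differ). The grok filter has broken the raw line into named fields, and the date filter has used the log's own timestamp to set @timestamp:

{
       "message" => "127.0.0.1 - - [11/Dec/2013:00:01:45 -0800] \"GET /xampp/status.php HTTP/1.1\" 200 3891 ...",
    "@timestamp" => "2013-12-11T08:01:45.000Z",
      "clientip" => "127.0.0.1",
          "verb" => "GET",
       "request" => "/xampp/status.php",
      "response" => "200",
         "bytes" => "3891"
}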


Run Logstash on a local file by configuring the input section. Below we parse an Apache access log from the local server.

input {
  file {
    path => "/Users/kurt/logs/access_log"
    start_position => "beginning"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}
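A caveat worth knowing: the file input records how far it has read in a sincedb file, so start_position => "beginning" only applies to files Logstash has never seen before. When re-testing against the same file you can discard that bookkeeping (a testing-only sketch of the same input):

input {
  file {
    path => "/Users/kurt/logs/access_log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # do not remember the read position between runs
  }
}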

Logstash configuration for parsing Nginx logs

Nginx's default "combined" access-log format matches the same %{COMBINEDAPACHELOG} grok pattern, so the configuration is nearly identical to the Apache one; only the file path and the type tag change (the path below assumes a stock Nginx install):

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "nginx_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}
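If events come through with a _grokparsefailure tag, check whether your nginx.conf defines a custom log_format; %{COMBINEDAPACHELOG} only fits the stock format, which nginx defines as:

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';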

Log Monitoring with Kibana + Logstash + Elasticsearch



Centralized logging using Logstash and Elasticsearch can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place.


Installing Java 

yum install java-1.7.0-openjdk-*
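
Verify the installation:

java -version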

Install Elasticsearch

yum install https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.noarch.rpm

Elasticsearch is now installed. Let's edit the configuration file: /etc/elasticsearch/elasticsearch.yml

Add the following line somewhere in the file to disable dynamic scripts, which could otherwise let anyone who reaches the HTTP API run arbitrary code:

script.disable_dynamic: true

You will also want to restrict outside access to your Elasticsearch instance, so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:

network.host: localhost

Then disable multicast by finding the discovery.zen.ping.multicast.enabled item and uncommenting it so it looks like this:

discovery.zen.ping.multicast.enabled: false


Now start Elasticsearch:

sudo service elasticsearch restart
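
Give it a few seconds to come up, then verify that it answers locally (after the network.host change it is no longer reachable from other hosts):

curl -X GET 'http://localhost:9200'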


Install Nginx

yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install nginx -y

Download the sample Nginx configuration from Kibana's github repository to your home directory:

cd ~; curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

Open the sample configuration file for editing:

vi nginx.conf

Find and change the values of server_name to your FQDN (or localhost if you aren't using a domain name) and root to where we will install Kibana, so they look like the following entries:

server_name FQDN;
root  /usr/share/nginx/kibana3;

Save and exit. Now copy it over your Nginx default server block with the following command:

sudo cp ~/nginx.conf /etc/nginx/conf.d/default.conf


Installing Kibana to visualize the logs
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
tar zxvf kibana-3.1.1.tar.gz


Open the Kibana configuration file kibana-3.1.1/config.js, find the line that specifies the Elasticsearch server URL, and replace the port number (9200 by default) with 80, since we will reach Elasticsearch through the Nginx proxy on port 80:

   elasticsearch: "http://"+window.location.hostname+":80",

mv kibana-3.1.1 /usr/share/nginx/kibana3

Start Nginx:

service nginx start

Install httpd-tools, which provides the htpasswd utility:

sudo yum install httpd-tools
Then generate a login that will be used in Kibana to save and share dashboards (substitute your own username):
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user
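
Restart Nginx so the password file and the new server block take effect:

sudo service nginx restart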

Install Logstash

yum install https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.2-1_2c0f5a1.noarch.rpm -y

Creating Certificates

cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
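
If you are still in /etc/pki/tls, you can inspect the generated certificate to confirm its subject and ten-year validity window:

openssl x509 -in certs/logstash-forwarder.crt -noout -subject -dates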


cat << 'EOF' > /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

cat << 'EOF' > /etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF


cat << 'EOF' > /etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF
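
Now restart Logstash to load the three configuration files. If you would like to sanity-check them first, the 1.4.x RPM installs Logstash under /opt/logstash (adjust the path if your layout differs):

/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
sudo service logstash restart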




Set Up Logstash Forwarder

From the Logstash server, copy the SSL certificate to each client server whose logs you want to ship (substitute your own login and the client's private IP):

scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp


Install the Logstash Forwarder Package (on the client server)

yum install -y http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm

Next, you will want to install the Logstash Forwarder init script, so it starts on bootup. We will use the init script provided by logstashbook.com:

cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
sudo chmod +x logstash-forwarder

The init script depends on a file called /etc/sysconfig/logstash-forwarder. A sample file is available to download:

sudo curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

sudo vi /etc/sysconfig/logstash-forwarder
And modify the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Save and quit.

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash Forwarder
On the client server, create the Logstash Forwarder configuration file, which is in JSON format. Replace 192.168.255.1 with your Logstash server's private IP address:

cat << EOF > /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.255.1:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}
EOF


Note that this is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000; see the example below.
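
For instance, a hypothetical extra entry for the Nginx access log from earlier (the path and type label are illustrative) would sit next to the syslog entry inside the "files" array:

    {
      "paths": [ "/var/log/nginx/access.log" ],
      "fields": { "type": "nginx_access" }
    }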

Now we will want to add the Logstash Forwarder service with chkconfig:

sudo chkconfig --add logstash-forwarder

Now start Logstash Forwarder to put our changes into place:

sudo service logstash-forwarder start
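
Back on the Logstash/Elasticsearch server you can confirm that events are arriving, with an illustrative query (run it there, since Elasticsearch only listens on localhost):

curl 'http://localhost:9200/_search?q=type:syslog&pretty'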


Now browse to your Kibana server's IP address (or FQDN) to bring up the dashboard.