Get detailed debugging info from an Elasticsearch node

curl -XGET "http://localhost:9200/_nodes/process?pretty"

 

{
  "cluster_name" : "prod-escluster",
  "nodes" : {
    "NUyPv6zIQDS3wo_u_8FUaw" : {
      "name" : "es01",
      "transport_address" : "inet[/192.168.1.147:9300]",
      "host" : "es-01.streamuk.com",
      "ip" : "127.0.0.1",
      "version" : "1.4.2",
      "build" : "927caff",
      "http_address" : "inet[/192.168.1.147:9200]",
      "attributes" : {
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 2185,
        "max_file_descriptors" : 500000,
        "mlockall" : false
      }
    }
  }
}
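If you save the response to a file, you can grep out just the process limits. A minimal sketch (nodes.json here is a trimmed copy of the output above, so the example is self-contained):

```shell
# Filter the process limits out of a saved _nodes/process response
cat > nodes.json <<'EOF'
{
  "cluster_name" : "prod-escluster",
  "nodes" : {
    "NUyPv6zIQDS3wo_u_8FUaw" : {
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 2185,
        "max_file_descriptors" : 500000,
        "mlockall" : false
      }
    }
  }
}
EOF
# Print only the two limits we usually care about
grep -E '"(max_file_descriptors|mlockall)"' nodes.json
```

`"mlockall" : false` means the JVM heap is not locked in memory and can be swapped out, which is worth knowing when debugging performance.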

Backup and Restore Elasticsearch

While Elasticsearch is usually run as a cluster, for the sake of this tutorial I am demonstrating the _snapshot and _restore APIs.

 

mkdir -p /mnt/backups/my_backup
chmod -R 777 /mnt/backups/

The backup directory must be available on all nodes (e.g. a shared filesystem mount).
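Before registering the repository it is worth checking that the path actually exists and is writable. A minimal sketch, using a throwaway /tmp path so it can be run anywhere:

```shell
# Hypothetical path for illustration; on a real cluster this would be the
# shared mount, e.g. /mnt/backups/my_backup, checked on every node
backup_dir=/tmp/backups_demo/my_backup
mkdir -p "$backup_dir"
chmod -R 777 /tmp/backups_demo
# Prove the directory is writable, then clean up the probe file
touch "$backup_dir/.write_test" && echo "writable" && rm "$backup_dir/.write_test"
```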


curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup",
    "compress": true
  }
}'






[root@centos-base mnt]# curl -XGET 'http://localhost:9200/_snapshot/my_backup?pretty'

{
  "my_backup" : {
    "type" : "fs",
    "settings" : {
      "compress" : "true",
      "location" : "/mnt/backups/my_backup"
    }
  }
}



curl -XGET 'http://localhost:9200/_snapshot?pretty'

{
  "my_backup" : {
    "type" : "fs",
    "settings" : {
      "compress" : "true",
      "location" : "/mnt/backups/my_backup"
    }
  }
}

_____________________________________________________________________________________________

changing the repository settings


curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup",
    "compress": true,
    "verify": true
  }
}'


 curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_`date | tr -d " " | tr -d ":" | tr '[:upper:]' '[:lower:]' `?wait_for_completion=true&pretty"
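The backticked pipeline above just turns the current date into a safe snapshot name: no spaces, no colons, all lowercase. Reproducing it on a fixed date string shows exactly what it produces:

```shell
# Reproduce the snapshot-name transformation on a fixed date string
raw="Sat Apr 25 22:34:54 CEST 2015"
name=$(printf '%s' "$raw" | tr -d " " | tr -d ":" | tr '[:upper:]' '[:lower:]')
echo "snapshot_$name"
# -> snapshot_satapr25223454cest2015
```

This is the same name format used in the restore example below.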
_____________________________________________________________________________________________

restoring a snapshot

 mkdir -p /mnt/backups/my_backup
 chmod -R 777 /mnt/backups/

 
 Create repository
 -----------------------
 
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup",
    "compress": true,
    "verify": true
  }
}'

	
 restore from file system
 --------------------------------
 
  curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_satapr25223454cest2015/_restore"

nJoy 😉

Sending Elasticsearch logs to a syslog server

yum install rsyslog -y

Add the following to /etc/rsyslog.conf on the client system:

############

$ModLoad imfile
$InputFileName /var/log/elasticsearch/elasticsearch.log
$InputFileTag elasticsearch
$InputFileStateFile stat-elasticsearch
$InputFileSeverity Info
$InputFileFacility daemon
$InputRunFileMonitor
#local3.* hostname:<portnumber>

daemon.* @192.168.1.66:514


############

 

If you also want all local logs forwarded to the syslog server, add:

 

*.* @192.168.1.66

 

at the end of the file.

Then issue a:

service rsyslog restart

and watch the logs flow in.

 

nJoy 😉
Installing sample data in Elasticsearch

After installing Elasticsearch, it is useful for testing and training to load some sample data.

1) Create the mapping:

curl -XPUT http://localhost:9200/shakespeare -d '
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "speaker" : {"type": "string", "index" : "not_analyzed" },
    "play_name" : {"type": "string", "index" : "not_analyzed" },
    "line_id" : { "type" : "integer" },
    "speech_number" : { "type" : "integer" }
   }
  }
 }
}
';
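A malformed body will make the PUT fail with an unhelpful error, so it can save a round trip to validate the JSON locally first. A minimal sketch using `python3 -m json.tool` (any JSON validator will do):

```shell
# Sanity-check the mapping body locally before sending it to Elasticsearch
cat > mapping.json <<'EOF'
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "speaker" : {"type": "string", "index" : "not_analyzed" },
    "play_name" : {"type": "string", "index" : "not_analyzed" },
    "line_id" : { "type" : "integer" },
    "speech_number" : { "type" : "integer" }
   }
  }
 }
}
EOF
python3 -m json.tool < mapping.json > /dev/null && echo "mapping is valid JSON"
```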

2) Load the data using the bulk API:

wget "https://github.com/ropensci/elastic_data/blob/master/data/shakespeare_data.json?raw=true" -O  shakespeare.json

curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json
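The bulk file is newline-delimited JSON: every document line is preceded by an action line naming the target index. A sketch of the format (the `line` type and `text_entry` field are assumptions about this dataset's layout; the other field names follow the mapping above):

```shell
# Sketch of the bulk format: one action line, then one document line per entry
cat > sample_bulk.json <<'EOF'
{"index":{"_index":"shakespeare","_type":"line","_id":"1"}}
{"line_id":1,"play_name":"Henry IV","speaker":"KING HENRY IV","speech_number":1,"text_entry":"So shaken as we are, so wan with care,"}
EOF
# The file must end with a newline, or the last document is dropped
wc -l < sample_bulk.json
```

A file like this would be indexed with the same `--data-binary @sample_bulk.json` call; `--data-binary` matters because plain `-d` strips the newlines the bulk API depends on.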

 

nJoy 😉