
Kris Buytaert is a long-time Linux and Open Source consultant. He is one of the instigators of the devops movement and currently works for Inuits. He frequently speaks at and organizes international conferences, and has written on these subjects in various books, papers and articles. He spends most of his time bridging the gap between developers and operations, with a strong focus on High Availability, Scalability, Virtualisation and Large Infrastructure Management projects, trying to build infrastructures that can survive the 10th floor test, better known today as the cloud, while actively promoting the devops idea! Kris is a DZone MVB.

Logstash and ElasticSearch

02.27.2012

"An expert is a man who has made all the mistakes which can be made, in a narrow field."

Niels Bohr

When I set up Logstash for the very first time I got bitten by an empty search: apparently no logs were being indexed. The log files indeed told me why:
    WARN: org.elasticsearch.discovery.zen.ping.unicast: [Blaire, Allison] failed to send ping to [[#zen_unicast_1#][inet[/127.0.0.1:9300]]]
    INFO | jvm 1 | 2012/02/06 22:45:55 | org.elasticsearch.transport.RemoteTransportException: [Page, Karen][inet[/127.0.0.1:9300]][discovery/zen/unicast]
    INFO | jvm 1 | 2012/02/06 22:45:55 | Caused by: java.io.EOFException

The above is the typical error when the ElasticSearch version you are running externally is not in sync with the one Logstash embeds: yes, those versions need to match.
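
A quick sanity check after any upgrade is to ask the external ElasticSearch node which version it is actually running; the version.number field in the response has to match the ElasticSearch version bundled with your Logstash release (the host and port below assume the default local setup from this post):

    wget -q -O - http://localhost:9200/?pretty=true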

Fast forward a couple of weeks... and I'm upgrading Logstash, and therefore also ElasticSearch. I have a Vagrant setup to play with, so all of the components are running on one node.

I kept running into a similar problem. This time, however, I saw log entries being indexed, and I could get data from my ElasticSearch setup using
wget -q -S -O - http://localhost:9200/_status?pretty=true

But the web interface kept showing no results ;(

While nagging about it on IRC, Jordan gave me the insight:

2012-01-31.194347+0100CET.txt:(07:55:36 PM) whack: slight caveat that elasticsearch clients also join the cluster, so if you point everyone at 127.0.0.1:9300, that :9300 could be one of your clients, not the server

Indeed, when you accidentally start any of the Logstash instances (server/shipper/web) before you start your ElasticSearch instance, you can be in trouble.
Ordering really matters: you need to start ElasticSearch before you start the clients.
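
A minimal startup sketch, assuming a standalone ElasticSearch install under /opt and a monolithic Logstash jar (paths, jar name and config file name are placeholders, and the agent/web flags vary a bit between Logstash versions, so check --help for yours):

    # start ElasticSearch first
    /opt/elasticsearch/bin/elasticsearch
    # wait until it answers on its HTTP port
    until wget -q -O /dev/null http://localhost:9200/_status; do sleep 1; done
    # only then start the Logstash agent and the web interface
    java -jar logstash-monolithic.jar agent -f indexer.conf
    java -jar logstash-monolithic.jar web --backend elasticsearch://localhost/

The until loop simply waits until ElasticSearch is up, so that no Logstash client can end up being the node listening on 9300.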

Obviously, if you don't use the unicast setup you don't run into this problem.
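
For reference, the unicast setup in question is driven by the zen discovery settings in elasticsearch.yml; a sketch with example values:

    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]

With multicast discovery the nodes find each other by themselves, so nothing points explicitly at 127.0.0.1:9300.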

So what other mistakes should I make?


Source:  http://www.krisbuytaert.be/blog/logstash-and-elasticsearch
Published at DZone with permission of Kris Buytaert, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Goel Yatendra replied on Thu, 2012/03/15 - 2:57pm

You can get around this by changing the transport.tcp.port setting in elasticsearch.yml. If you do this, you'll need to update the port in the logstash elasticsearch output config. You'll also need to update the port number in the elasticsearch backend URL parameter if you're using logstash-web.
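
A sketch of that workaround, with an arbitrary example port (the option names of the Logstash elasticsearch output differ a bit between versions, so treat this as an illustration rather than a drop-in config):

    # elasticsearch.yml
    transport.tcp.port: 9301

    # logstash configuration
    output {
      elasticsearch {
        host => "127.0.0.1"
        port => 9301
      }
    }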
