Implementing Elastic Search with Kibana and Logstash
Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents, and it is commonly used to implement advanced search functionality.
Kibana is an open source analytics and visualisation platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices.
LogStash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favourite “stash” (like Elasticsearch).
Before we start loading data from SQL Server into Elasticsearch, download Elasticsearch, Kibana, and Logstash of the same version (6.2.4 is recommended) from the Elastic website.
As Elasticsearch is an open-source project built with Java and is mostly used alongside other open-source tools, documentation on importing data from SQL Server to ES using Logstash is relatively hard to find.
I'd like to share how to import SQL Server data into Elasticsearch (version 6.2) using Logstash and verify the result in Kibana.
Step 1 - Install Java SE Development Kit 8
Install the Java SE Development Kit 8. The Elasticsearch documentation clearly recommends installing JDK version 8 rather than the newest JDK releases.
Download "JDK 8 8u162" and install it on the machine and make sure that "java" is in the PATH variable so that it can be called in any directory within a command line.
Step 2 - Install JDBC Driver for SQL Server
Download and install Microsoft JDBC Driver 4.2 for SQL Server, not the latest version.
As Elasticsearch is built with JDK 8, you can't use the latest version of JDBC Driver (version 6.2) for SQL Server as it does not support JDK 8.
Step 3 – Set CLASSPATH for the JDBC Driver
* Set the path so that Java can find the JDBC driver. Go to the directory under which you have installed the SQL Server JDBC driver.
* Navigate to the JAR file named sqljdbc42.jar, which is found under <<JDBC installation folder>>\sqljdbc_4.2\enu\jre8.
* Copy the full path to the JAR file.
* A handy trick on Windows 7/8/10: Shift + right-click on a file gives you a "Copy as Path" option.
* Add a CLASSPATH environment variable (if you don't already have one) with the following values, as in the sketch after this list:
"." - so that the current directory is searched.
The JAR file path copied previously.
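For example, on Windows the variable can be set for the current command-prompt session as follows (the driver location below is a placeholder; use the actual path you copied above):
> set CLASSPATH=.;<<JDBC installation folder>>\sqljdbc_4.2\enu\jre8\sqljdbc42.jar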
Step 4 – Create an Elasticsearch Index from Kibana to Import Data Into
You can use cURL or Postman to create an index, but here I will use the Kibana console to create an index named "sample", which is roughly equivalent to a database in relational database terminology.
Before we start the Kibana service, we need to start Elasticsearch so that Kibana would not whine about Elasticsearch not being present.
Kibana logs warnings when Elasticsearch is unavailable
Go to the Elasticsearch installation and start the service.
C:\Users\Naveenr\Documents\Downloads\elasticsearch-6.2.4
> bin\elasticsearch.bat
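Once Elasticsearch is up, you can optionally confirm it is reachable by requesting the root endpoint, which returns a small JSON document with the cluster name and version (curl is assumed to be available; a browser pointed at the same URL works too):
> curl http://localhost:9200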
And then go to the Kibana installation directory to start Kibana service.
C:\Users\Naveenr\Documents\Downloads\kibana-6.2.4-windows-x86_64
> bin\kibana.bat
If Kibana starts without an issue, the log output will confirm that Kibana started successfully and report that it is running on http://localhost:5601.
Open the URL in a browser of your choice, then go to the "Dev Tools" link at the bottom left of the page.
Once you see the Console, create a new index with the following command.
PUT sample
{
  "settings" : {
    "index" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 0
    }
  }
}
You will then see Elasticsearch's response on the right-hand panel confirming that the new index "sample" was created successfully.
Step 5 – Create the Logstash Configuration File
Go to the Logstash installation folder and create a file named "sql.conf" (the name doesn't really matter).
Here is the LogStash configuration.
input {
  jdbc {
    jdbc_connection_string => "jdbc:sqlserver://cc:1433;databaseName=test;integratedSecurity=true;"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_user => "xxx"
    statement => "SELECT * FROM table_name"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sample"
  }
}
Here, the database connection settings and the query whose results should be indexed into Elasticsearch are defined in the input section, while the host and port where Elasticsearch is running and the target index name are defined in the output section.
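If you prefer not to rely on the CLASSPATH variable, or you want the import to re-run automatically, the jdbc input plugin also supports options such as jdbc_driver_library, jdbc_password, and schedule. A minimal sketch of the input section with these options (the driver path and cron expression are only placeholders):
input {
  jdbc {
    # point directly at the driver JAR instead of relying on CLASSPATH
    jdbc_driver_library => "<<JDBC installation folder>>\sqljdbc_4.2\enu\jre8\sqljdbc42.jar"
    jdbc_connection_string => "jdbc:sqlserver://cc:1433;databaseName=test;integratedSecurity=true;"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_user => "xxx"
    statement => "SELECT * FROM table_name"
    # cron-style schedule: re-run the query every minute
    schedule => "* * * * *"
  }
}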
Step 6 – Import Data With Logstash
Run the Logstash service from the Logstash installation folder where "sql.conf" was created.
bin\logstash -f sql.conf
The -f flag specifies the configuration file to use, in our case the "sql.conf" file created in the previous step.
The result of a successful Logstash run will look similar to the following output.
C:\Users\Naveenr\Documents\Downloads\logstash-6.2.4
> bin\logstash -f sql.conf
Sending Logstash's logs to c:/misc/elasticco/logstash-6.2.2/logs which is now configured via log4j2.properties
[2018-03-17T18:20:37,537][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"c:/misc/elasticco/logstash-6.2.2/modules/fb_apache/configuration"}
[2018-03-17T18:20:37,568][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"c:/misc/elasticco/logstash-6.2.2/modules/netflow/configuration"}
[2018-03-17T18:20:37,990][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-03-17T18:20:39,100][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.2"}
[2018-03-17T18:20:39,975][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-17T18:20:46,049][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-03-17T18:20:46,143][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-03-17T18:20:46,924][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[2018-03-17T18:20:47,174][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-03-17T18:20:48,956][INFO ][logstash.inputs.jdbc ] (0.062000s) SELECT * FROM Users
[2018-03-17T18:21:19,117][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#"}
Step 7 – Verify in Kibana and Perform Searches on Elasticsearch
Finally, the data is imported into Elasticsearch. Calling GET sample/_count in the Kibana console returns the number of records imported from the SQL Server table.
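For example, running the following in the Kibana console returns a JSON response whose "count" field should match the number of rows returned by the SQL query (the values shown here are only illustrative):
GET sample/_count

{
  "count" : 5,
  "_shards" : {
    "total" : 3,
    "successful" : 3,
    "skipped" : 0,
    "failed" : 0
  }
}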
Now open a browser and go to
http://localhost:9200/index_name/_search
This will list the records stored in the Elasticsearch index that was populated from the SQL Server database (index_name = sample in the above case); note that by default Elasticsearch returns only the first 10 matching documents.
Now go to
http://localhost:9200/index_name/_search?q=column_name:*a*
Using the query parameter q with the name of the column to be searched and an input pattern like '*input_text*' lists all the documents whose value in that column matches the pattern.
In the above URL, all the documents that contain 'a' in the column called 'column_name' will be returned.
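The same wildcard search can also be written as a JSON request in the Kibana console. This is only a sketch, assuming a column named column_name as in the URL above:
GET sample/_search
{
  "query": {
    "query_string": {
      "query": "column_name:*a*"
    }
  }
}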
Conclusion:
Elasticsearch with Kibana and Logstash is an efficient way to implement advanced search functionality. The process involves installing the Elastic (ELK) Stack on your system, running Elasticsearch and Kibana from the command prompt, and creating an index in Kibana. Then add a configuration file in Logstash with all the required settings and run it. After that, advanced searches can be performed on the records stored in the SQL Server database.
Contact us for further details
Naveen Ragamoini
Software Engineer
naveenr.in@mouritech.com
MOURI Tech