Posts

Integrate Spark Streaming, Kafka, and Logstash to read and analyze logs in real time

Below are the simple steps to integrate Spark with Kafka and Logstash.

Installation of Logstash:

1. The first few steps install and configure Logstash. Create the repository file:

       sudo vi /etc/yum.repos.d/logstash.repo

2. Add the lines below to the file:

       [logstash-2.3]
       name=Logstash repository for 2.3.x packages
       baseurl=https://packages.elastic.co/logstash/2.3/centos
       gpgcheck=1
       gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
       enabled=1

3. Install Logstash:

       yum install logstash

4. Change to the installation directory:

       cd /opt/logstash

5. Create a config file logstash-kafka.conf and add the content below:

       input {
         file {
           path => "/opt/gen_logs/logs/access.log"
         }
       }
       output {
         kafka {
           codec => plain {
             format => "%{message}"
           }
           topic_id => "logstash"
         }
       }

6. Check the configuration using bel
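Once the pipeline above is delivering lines from access.log to the Kafka topic, a Spark Streaming consumer still has to break each record into fields before it can analyze anything. As a minimal sketch in plain Java, here is one way to parse a common-log-format line; the regex, class name, and sample line are illustrative assumptions, not something the post specifies:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Splits one common-log-format access log line into fields.
// The format assumed here: host ident user [timestamp] "request" status size
public class AccessLogParser {
    private static final Pattern LINE = Pattern.compile(
        "^(\\S+) (\\S+) (\\S+) \\[([^\\]]+)\\] \"([^\"]*)\" (\\d{3}) (\\S+)");

    // Returns {host, timestamp, request, status, bytes}, or null if the
    // line does not match the expected format.
    public static String[] parse(String line) {
        Matcher m = LINE.matcher(line);
        if (!m.find()) {
            return null;
        }
        return new String[] { m.group(1), m.group(4), m.group(5),
                              m.group(6), m.group(7) };
    }

    public static void main(String[] args) {
        String sample = "127.0.0.1 - - [10/Oct/2016:13:55:36 -0700] "
            + "\"GET /index.html HTTP/1.1\" 200 2326";
        String[] fields = parse(sample);
        System.out.println(fields[0] + " -> " + fields[3]);
    }
}
```

In a streaming job this parse would typically run inside a map over each batch of messages consumed from the topic, with non-matching lines filtered out.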

How to add multiple triggers for a job in Quartz

In this example we will see how we can add multiple triggers for a single job in the Quartz scheduler. We will follow these steps:

1. First, let's create a job class.

       package com.techniqpatch.quartz;

       import org.quartz.Job;
       import org.quartz.JobExecutionContext;
       import org.quartz.JobExecutionException;

       public class TestJob implements Job {
           @Override
           public void execute(JobExecutionContext context) throws JobExecutionException {
               System.out.println("Job is running");
           }
       }

2. Now let's create a class that will schedule TestJob with two triggers.

       package com.techniqpatch.quartz;

       import java.text.ParseException;
       import java.util.ArrayList;
       import java.util.HashMap;
       import java.util.List;
       import java.util.Map;

       import org.quartz.CronScheduleBuilder;
       import org.quartz.CronTrigger;
       import org.quartz.JobBuilder;
       import org.quartz.JobDetail;
       import org.quartz.JobKey;
       import org.quartz.Scheduler;
       import org.quartz.SchedulerException;
       import org.quartz.Schedule
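The scheduling class above is cut off, but the idea it builds, one job fired by two independent schedules, can be sketched with the JDK alone using ScheduledExecutorService. This is not the Quartz API; the class and method names here are illustrative, and the fixed-rate schedules stand in for the two cron triggers:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// One runnable "job" fired by two independent schedules, mirroring the
// one-JobDetail / two-Trigger setup the Quartz example builds.
public class TwoSchedulesOneJob {
    // Schedules the same job with two fixed-rate "triggers" and returns
    // how many times it fired before both schedules had run at least once.
    public static int runTwoSchedules() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        AtomicInteger fires = new AtomicInteger();
        CountDownLatch atLeastTwice = new CountDownLatch(2);

        Runnable job = () -> {
            fires.incrementAndGet();
            atLeastTwice.countDown();
        };

        // Two "triggers": the same job on two different schedules.
        scheduler.scheduleAtFixedRate(job, 0, 100, TimeUnit.MILLISECONDS);
        scheduler.scheduleAtFixedRate(job, 50, 200, TimeUnit.MILLISECONDS);

        atLeastTwice.await(2, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        return fires.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("job fired " + runTwoSchedules() + " times");
    }
}
```

In Quartz itself the equivalent step is passing one JobDetail together with a set of triggers to the scheduler, so that either trigger firing executes the same job instance.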