Tuesday, March 13, 2018

JPA Static Meta Model

  1. When you write a criteria query or create a dynamic entity graph, you need to reference the entity classes and their attributes.
  2. The quickest and easiest way is to provide the required names as Strings.
  3. But this has several drawbacks, e.g. you have to remember or look up all the names of the entity attributes when you write the query.
  4. It also causes even greater issues in later phases of the project if you have to refactor your entities and change the names of some attributes.
  5. In that case you have to use the search function of your IDE and try to find all Strings that reference the changed attributes.
  6. This is a tedious and error-prone activity which can easily take up most of the refactoring time.
  7. Use the static metamodel to write criteria queries and dynamic entity graphs.
  8. This is a small feature defined by the JPA specification which provides a type-safe way to reference the entities and their properties (see the sketch below).
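
A minimal sketch of the difference, using the AlertEO entity and the generated AlertEO_ metamodel class introduced later in this post (the CriteriaBuilder cb, CriteriaQuery q and Root a are set up exactly as in the full example at the end of this post):

    // String-based reference: compiles even if the attribute is later renamed,
    // the mistake only shows up at runtime
    q.where(cb.like(a.get("name"), "J%"));

    // static metamodel reference: a renamed attribute becomes a compile-time error
    q.where(cb.like(a.get(AlertEO_.name), "J%"));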
  1. The Metamodel Generator also takes into consideration XML configuration specified in orm.xml or in mapping files specified in persistence.xml. However, if all configuration is in XML, you need to add the following persistence-unit metadata to at least one of the mapping files:
    <persistence-unit-metadata>
      <xml-mapping-metadata-complete/>
    </persistence-unit-metadata>
  2. Maven dependency: The jar file for the annotation processor can be added as shown below.
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-jpamodelgen</artifactId>
        <version>1.0.0</version>
    </dependency>
    
  3. Maven compiler plugin configuration - direct execution

    
    <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
            <source>1.6</source>
            <target>1.6</target>
            <compilerArguments>
                <processor>org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor</processor>
            </compilerArguments>
        </configuration>
    </plugin>
    
  4. Maven compiler plugin configuration - indirect execution
    
    <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
            <source>1.6</source>
            <target>1.6</target>
            <compilerArgument>-proc:none</compilerArgument>
        </configuration>
    </plugin>
    

  5. Configuration with maven-processor-plugin
    
    <plugin>
        <groupId>org.bsc.maven</groupId>
        <artifactId>maven-processor-plugin</artifactId>
        <version>2.0.5</version>
        <executions>
            <execution>
                <id>process</id>
                <goals>
                    <goal>process</goal>
                </goals>
                <phase>generate-sources</phase>
                <configuration>
                    <processors>
                        <processor>org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor</processor>
                    </processors>
                </configuration>
            </execution>
        </executions>
        <dependencies>
            <dependency>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-jpamodelgen</artifactId>
                <version>1.2.0.Final</version>
            </dependency>
        </dependencies>
    </plugin>
    
  6. Javac Task configuration
    As mentioned before, the annotation processor will run automatically each time the Java compiler is called, provided the jar file is on the classpath.

    <!-- illustrative Ant javac task; hibernate-jpamodelgen only needs to be on the compile classpath -->
    <javac srcdir="${src.dir}"
           destdir="${target.dir}"
           classpath="${classpath}"/>
    
  7. IDE Configuration
  1. A simple entity for this example.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name="ALERT")
    public class AlertEO implements java.io.Serializable{
    
     private static final long serialVersionUID = 1L;
     private Integer id;
     private String name;
     private String description;
     /**
      * Gets the id.
      *
      * @return the id
      */
     @Id
     @Column(name="id")
     @GeneratedValue(strategy = GenerationType.AUTO)
     public Integer getId() {
      return id;
     }
     
     /**
      * Sets the id.
      * @param id the id to set
      */
     public void setId(Integer id){
      this.id = id;
     }
    
     /**
      * Gets the name.
      * @return the name
      */
     @Column(name = "name")
     public String getName(){
      return name;
     }
     
     /**
      * Sets the name.
      * @param name the name to set
      */
     public void setName(String name){
      this.name = name;
     }
    
     /**
      * Gets the description.
      * @return the description
      */
     @Column(name = "description")
     public String getDescription(){
      return description;
     }
    
     /**
      * Sets the description.
      * @param description the description to set
      */
     public void setDescription(String description){
      this.description=description;
     }
      /* (non-Javadoc)
      * @see java.lang.Object#toString()
      */
     @Override
     public String toString() {
      return "AlertEO [id=" + id + ", name=" + name + ", description=" + description + "]";
     }
    }
    



  2. The class of the static metamodel looks similar to the entity.
    Based on the JPA specification, there is a corresponding metamodel class for every managed class in the persistence unit.
    You can find it in the same package, and it has the same name as the corresponding managed class with an '_' appended at the end.

    import javax.annotation.Generated;
    import javax.persistence.metamodel.SingularAttribute;
    import javax.persistence.metamodel.StaticMetamodel;

    @Generated(value = "org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor")
    @StaticMetamodel(AlertEO.class)
    public abstract class AlertEO_ {
     public static volatile SingularAttribute<AlertEO, Integer> id;
     public static volatile SingularAttribute<AlertEO, String> name;
     public static volatile SingularAttribute<AlertEO, String> description;
    }
    

  3. Using metamodel classes
  4. You can use the metamodel classes in the same way as you use the String reference to the entities and attributes.
  5. The APIs for criteria queries and dynamic entity graphs provide overloaded methods that accept Strings as well as implementations of the Attribute interface (a sketch of the entity graph case follows the query example below).

    CriteriaBuilder cb = this.em.getCriteriaBuilder();
    // create the query
    CriteriaQuery<AlertEO> q = cb.createQuery(AlertEO.class);
    // set the root class
    Root<AlertEO> a = q.from(AlertEO.class);
    // use the metamodel class to define the where clause
    q.where(cb.like(a.get(AlertEO_.name), "J%"));
    // perform query
    this.em.createQuery(q).getResultList();
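
The same type-safe attribute references work for dynamic entity graphs. Below is a minimal sketch, assuming a JPA 2.1 EntityManager (this.em), the AlertEO entity shown above and an alert with primary key 1; imports for javax.persistence.EntityGraph, java.util.Map and java.util.HashMap are omitted to match the snippet style above:

    // create a dynamic entity graph for AlertEO
    EntityGraph<AlertEO> graph = this.em.createEntityGraph(AlertEO.class);
    // reference the attributes in a type-safe way instead of passing Strings
    graph.addAttributeNodes(AlertEO_.name, AlertEO_.description);

    // use the graph as a fetch graph hint when loading a single entity
    Map<String, Object> hints = new HashMap<>();
    hints.put("javax.persistence.fetchgraph", graph);
    AlertEO alert = this.em.find(AlertEO.class, 1, hints);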
    

Wednesday, January 10, 2018

ELK - Elasticsearch, Logstash and Kibana - an alternative to Splunk

ELK - Architecture

For more information on Kibana, here is a nice article: KIBANA SEARCH

  1. Step 1: Install Elasticsearch
    1. Download elasticsearch zip file from https://www.elastic.co/downloads/elasticsearch
    2. Extract it to a directory (unzip it)
    3. Run it (bin/elasticsearch or bin/elasticsearch.bat on Windows)
    4. Check that it runs using curl -XGET http://localhost:9200
    5. Here's how to do it (steps are written for OS X but should be similar on other systems):
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.zip
unzip elasticsearch-1.7.1.zip
cd elasticsearch-1.7.1
bin/elasticsearch
  1. Elasticsearch should be running now. You can verify it's running using curl. In a separate terminal window execute a GET request to Elasticsearch's status page:
curl -XGET http://localhost:9200
  1. If all is well, you should get the following result:
{
  "status" : 200,
  "name" : "Tartarus",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
  1. Step 2: Install Kibana 4
  2. Download the Kibana archive from https://www.elastic.co/downloads/kibana
  3. Please note that you need to download the appropriate distribution for your OS; the URL given in the example below is for OS X
  4. Extract the archive
  5. Run it (bin/kibana)
  6. Check that it runs by pointing your browser to Kibana's web UI
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-darwin-x64.tar.gz
tar xvzf kibana-4.1.1-darwin-x64.tar.gz
cd kibana-4.1.1-darwin-x64
bin/kibana
  1. Point your browser to http://localhost:5601 (if the Kibana page shows up, we're good - we'll configure it later)
  1. Step 3: Install Logstash
  2. Download Logstash zip from https://www.elastic.co/downloads/logstash
  3. Extract it (unzip it)
wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.zip
unzip logstash-1.5.3.zip
  1. Step 4: Configure Spring Boot's Log File
  2. In order to have Logstash ship log files to Elasticsearch, we must first configure Spring Boot to store log entries in a file.
  3. We will establish the following pipeline: Spring Boot App --> Log File --> Logstash --> Elasticsearch.
  4. There are other ways of accomplishing the same thing, such as configuring Logback to use a TCP appender to send logs to a remote Logstash instance via TCP, among many other configurations.
  5. Anyhow, let's configure Spring Boot's log file.
  6. The simplest way to do this is to configure the log file name in application.properties.
  7. It's enough to add the following line:
logging.file=application.log
Spring Boot will now log ERROR, WARN and INFO level messages in the application.log file and will also rotate it once it reaches 10 MB. A sketch for generating some test log entries follows below.
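
Since the Logstash file input configured below only tails the log file, it helps to have the application write fresh entries while testing. Below is a minimal sketch, assuming a Spring Boot application with scheduling enabled (@EnableScheduling on a configuration class) and SLF4J on the classpath; the class name and messages are purely illustrative:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class LogGenerator {

        private static final Logger log = LoggerFactory.getLogger(LogGenerator.class);

        // writes a few entries to application.log every 10 seconds
        @Scheduled(fixedRate = 10000)
        public void emit() {
            log.info("Heartbeat message for the ELK pipeline");
            log.warn("Sample warning message");
            try {
                throw new IllegalStateException("Sample exception for stacktrace tagging");
            } catch (IllegalStateException e) {
                // logged with a stacktrace so the 'stacktrace' tag in the Logstash filter can be verified
                log.error("Sample error with stacktrace", e);
            }
        }
    }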
  1. Step 5: Configure Logstash to Understand Spring Boot's Log File Format
  2. A typical Logstash config file consists of three main sections: input, filter and output.
  3. Each section contains plugins that do the relevant part of the processing,
  4. such as the file input plugin that reads log events from a file or the elasticsearch output plugin which sends log events to Elasticsearch.
  5. The input section defines where Logstash will read input data from;
  6. in our case it will be a file, hence we will use the file plugin with the multiline codec, which basically means that our input file may have multiple lines per log entry.
input {
  file {
    type => "java"
    path => "/path/to/application.log"
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}
  1. Explanation
  2. We're using file plugin.
  3. type is set to java - it's just an additional piece of metadata in case you use multiple types of log files in the future.
  4. path is the absolute path to the log file. It must be absolute - Logstash is picky about this.
  5. We're using multiline codec which means that multiple lines may correspond to a single log event,
  6. In order to detect lines that should logically be grouped with a previous line we use a detection pattern:
  7. pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*" -> each new log event needs to start with a date.
  8. negate => "true" -> if it doesn't start with a date ...
  9. what => "previous" -> ... then it should be grouped with a previous line.
  10. The file input plugin, as configured, will tail the log file (i.e. only read new entries at the end of the file). Therefore, when testing, in order for Logstash to read something you will need to generate new log entries.
  1. Filter Section
  2. The filter section contains plugins that perform intermediary processing on a log event.
  3. In our case, an event will either be a single log line or a multiline log event grouped according to the rules described above.
  4. In the filter section we will do several things:
  5. Tag a log event if it contains a stacktrace. This will be useful when searching for exceptions later on.
  6. Parse out (or grok, in Logstash terminology) the timestamp, log level, pid, thread, class name (actually the logger name) and log message.
  7. Specify the timestamp field and format - Kibana will use that later for time-based searches.
  8. The filter section for Spring Boot's log format that does the aforementioned things looks like this:
filter {
  #If log line contains tab character followed by 'at' then we will tag that entry as stacktrace
  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }

  #Grokking Spring Boot's default log format
  grok {
    match => [ "message", 
               "(?%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME})  %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?[A-Za-z0-9#_]+)\s*:\s+(?.*)",
               "message",
               "(?%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME})  %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?.*)"
             ]
  }

  #Parsing out timestamps which are in timestamp field thanks to previous grok section
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}
  1. Explanation:
  2. if [message] =~ "\tat" -> if the message contains a tab character followed by at (this is Ruby syntax) then...
  3. use the grok plugin to tag stacktraces:
  4. match => ["message", "^(\tat)"] -> when the message matches the beginning of a line followed by a tab followed by at, then...
  5. add_tag => ["stacktrace"] -> ... tag the event with the stacktrace tag.
  6. Use the grok plugin for regular Spring Boot log message parsing:
  7. The first pattern extracts the timestamp, level, pid, thread, class name (this is actually the logger name) and the log message.
  8. Unfortunately, some log messages don't have a logger name that resembles a class name (for example, Tomcat logs), hence the second pattern that skips the logger/class field and parses out the timestamp, level, pid, thread and the log message.
  9. Use the date plugin to parse and set the event date:
  10. match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ] -> the timestamp field (grokked earlier) contains the timestamp in the specified format.
  1. Output Section
  2. The output section contains output plugins that send event data to a particular destination.
  3. Outputs are the final stage in the event pipeline.
  4. We will be sending our log events to stdout (console output, for debugging) and to Elasticsearch.
  5. Compared to the filter section, the output section is rather straightforward:
output {
  # Print each event to stdout, useful for debugging. Should be commented out in production.
  # Enabling 'rubydebug' codec on the stdout output will make logstash
  # pretty-print the entire event as something similar to a JSON representation.
  stdout {
    codec => rubydebug
  }

  # Sending properly parsed log events to elasticsearch
  elasticsearch {
   hosts => ["127.0.0.1"]  #  takes an array of hosts (e.g. elasticsearch cluster) as value. 
  }
}
  1. Putting it all together
  2. Finally, the three parts - input, filter and output - need to be copy-pasted together and saved into a logstash.conf config file.
  3. Once the config file is in place and Elasticsearch is running, we can run Logstash:
  4. /path/to/logstash/bin/logstash -f logstash.conf
  5. If everything went well, Logstash is now shipping log events to Elasticsearch.
  1. Step 6: Configure Kibana
  2. Ok, now it's time to visit the Kibana web UI again.
  3. We have started it in step 2 and it should be running at http://localhost:5601.
  4. First, you need to point Kibana to the Elasticsearch index(es) of your choice.
  5. Logstash creates indices with the name pattern of logstash-YYYY.MM.DD.
  6. In Kibana Settings --> Indices configure the indices:
  7. Index contains time-based events (select this option)
  8. Use event times to create index names (select this option)
  9. Index pattern interval: Daily
  10. Index name or pattern: [logstash-]YYYY.MM.DD
  11. Click on "Create Index"
  12. Now click on "Discover" tab.
  13. It is the place for "Search" because it allows you to perform new searches and also to save/manage them.
  14. Log events should be showing up now in the main window.
  15. If they're not, then double-check the time period filter in the top right corner of the screen.
  16. The table will have 2 columns by default: Time and _source.
  17. In order to make the listing more useful, we can configure the displayed columns.
  18. From the menu on the left select level, class and logmessage.
Here is a sample output screenshot of the Kibana console.