My Quotes

When you were born, you cried and the world rejoiced.
Live your life in such a way that when you go, the world cries and you rejoice.

Tuesday, December 8, 2015

Maven Project Version in application

Getting the project version displayed in your pages, without much hassle, using Maven resource filtering.

pom.xml file changes: enable resource filtering so that Maven substitutes build properties into your pages.

Then reference ${project.version} in your HTML file; the actual version is filled in at build time.
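A minimal sketch of the pom.xml change, assuming the page lives under src/main/resources (for pages under src/main/webapp you would configure filtering on the maven-war-plugin instead):

    <build>
      <resources>
        <resource>
          <directory>src/main/resources</directory>
          <!-- Enables ${...} placeholder substitution at build time -->
          <filtering>true</filtering>
        </resource>
      </resources>
    </build>

With filtering on, every occurrence of ${project.version} in a filtered file is replaced with the version from the pom when the project is built.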

Repeated columns in JPA

I wanted to map the same column to two properties in a JPA entity. But when I did, I got an error saying the second mapping should be defined with insert="false", update="false".

 Approach 1: Here is an easy way to do it. In the entity bean, map the second property to the same column and mark it read-only, which is exactly what the error message asks for (@AttributeOverride is not valid on a plain field, so a second @Column is used instead):

        @Column(name = "UPLOAD_DATE")
        private Date uploadDate;

        // The repeated mapping must be read-only so that only
        // one property ever writes the column.
        @Column(name = "UPLOAD_DATE", insertable = false, updatable = false)
        private Timestamp uploadDateAsTime;

 Approach 2: Here is the alternative way to do it, via an XML mapping file. An attribute override in orm.xml takes care of giving unique column names to embeddable fields in the target entity class, as sketched below.
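A sketch of what that orm.xml override could look like; the class and attribute names are hypothetical, only the element structure is standard JPA:

    <entity class="com.example.FileRecord">
      <attributes>
        <!-- Override the embeddable's column so each usage gets a unique name -->
        <embedded name="uploadInfo">
          <attribute-override name="uploadDate">
            <column name="UPLOAD_DATE_2"/>
          </attribute-override>
        </embedded>
      </attributes>
    </entity>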


Friday, December 4, 2015

Spring Custom Context Loader

Use a custom ServletContextListener when you need initialization logic to run as soon as the web application starts, for example loading module configuration through a Spring service.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;

import com.fra.exception.ApplicationException;
import com.fra.exception.SystemException;
import com.fra.fv.service.IModuleConfigService;

// Registers the listener with a Servlet 3.0+ container.
@WebListener
public class CustomContextListener implements ServletContextListener {

    @Autowired
    @Qualifier(value = "mdlConfigService")
    IModuleConfigService mdlConfigService;

    final static Logger logger = Logger.getLogger(CustomContextListener.class);

    public CustomContextListener() {
    }

    public void contextInitialized(ServletContextEvent event) {
        try {
            // Startup logic goes here (the original post left the body blank),
            // e.g. loading module configuration via mdlConfigService. The catches
            // below assume the call placed here can throw these exceptions.
        } catch (ApplicationException e) {
            // ExceptionUtil's import was omitted in the original post.
            logger.error(ExceptionUtil.getFullExceptionAsString(e, 50));
        } catch (SystemException e) {
            logger.error(ExceptionUtil.getFullExceptionAsString(e, 50));
        }
    }

    public void contextDestroyed(ServletContextEvent event) {
        // Nothing to clean up.
    }
}
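One caveat: the servlet container, not Spring, instantiates this listener, so the @Autowired field may stay null unless something processes the annotation. A minimal sketch of pulling the bean from the root web application context instead, assuming Spring's ContextLoaderListener is registered ahead of this one:

    import org.springframework.context.ApplicationContext;
    import org.springframework.web.context.support.WebApplicationContextUtils;

    // Inside contextInitialized(ServletContextEvent event):
    ApplicationContext ctx = WebApplicationContextUtils
            .getRequiredWebApplicationContext(event.getServletContext());
    // "mdlConfigService" is the bean name used in the @Qualifier above.
    mdlConfigService = (IModuleConfigService) ctx.getBean("mdlConfigService");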

Wednesday, August 5, 2015


Clearing the Outlook Auto-Complete Cache

Solution 1:   Manually Delete    (For ALL Versions of Outlook)

1. Open Outlook.
2. Start a new message and begin typing the e-mail address you wish to clear from the cache until its suggestion appears.
3. Press the Down-Arrow button (on the keyboard) to select the e-mail address.
4. Press the Delete button (on the keyboard).
5. That e-mail entry is now removed from the Outlook Auto-complete cache.

Solution 2:  Empty Auto-Complete List Button    (ONLY for Outlook Versions 2010 & 2013)

1. Open Outlook.
2. Click File | Options.
3. Click on the Mail tab on the right.
4. Scroll down to Send messages and click the Empty Auto-Complete List button.

Solution 3:  Recreate the .nk2 File    (ONLY for Outlook Versions 2003 & 2007)

1. Close Outlook.
2. Open Windows Explorer or Internet Explorer.
3. Paste the following into the Address Bar:  %APPDATA%\Microsoft\Outlook
4. Delete the following file from this folder:   Outlook.nk2

Important Note:  This will delete ALL of your cached e-mail addresses.  It should ONLY be used if you want to wipe your cache clean or if there are corruption issues in your .nk2 file.  The first time you open Outlook after deleting this file, Outlook will create a NEW .nk2 cache file automatically & start caching the e-mail addresses you use from here on out.

Wednesday, January 14, 2015

Flume and Spark Integration

Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and
moving large amounts of log data.

 Approach 1: Flume-style Push-based Approach 
  • When your Flume + Spark Streaming application is launched, one of the Spark workers must run on the machine chosen to receive the Flume data.
  • Flume can be configured to push data to a port on that machine.
  • Due to the push model, the streaming application needs to be up, with the receiver scheduled and listening on the chosen port, before Flume can push data.

  • Configuring Flume
    agent.sinks = avroSink
    agent.sinks.avroSink.type = avro
    agent.sinks.avroSink.channel = memoryChannel
    agent.sinks.avroSink.hostname = [chosen machine's hostname]
    agent.sinks.avroSink.port = [chosen port]

    Configuring Spark Streaming Application
    Linking: In your SBT/Maven project definition, link your streaming application against the following artifact.
    groupId = org.apache.spark
    artifactId = spark-streaming-flume_2.10
    version = 1.1.0
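    In a Maven pom, that linking step is just a dependency entry:

      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-flume_2.10</artifactId>
        <version>1.1.0</version>
      </dependency>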

    Programming: In the streaming application code, import FlumeUtils and create an input DStream as follows.
    import org.apache.spark.streaming.flume.*;
    JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
        FlumeUtils.createStream(streamingContext, [chosen machine's hostname], [chosen port]);
    Note that the hostname should be the same as the one used by the resource manager in the cluster, so that resource allocation can match the names and launch the receiver on the right machine.
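    Putting the pieces together, a minimal driver might look like the sketch below; the class name and batch interval are illustrative, not from the original post:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Duration;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.flume.FlumeUtils;
    import org.apache.spark.streaming.flume.SparkFlumeEvent;

    public class FlumePushExample { // illustrative class name
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("FlumePushExample");
        // 2-second batches; pick an interval that suits your load.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(2000));

        // Listen on the same host/port the Flume avro sink pushes to.
        JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
            FlumeUtils.createStream(jssc, args[0], Integer.parseInt(args[1]));

        // Sanity check: print the number of events received per batch.
        flumeStream.count().print();

        jssc.start();
        jssc.awaitTermination();
      }
    }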

    Deploying: Package spark-streaming-flume_2.10 and its dependencies (except spark-core_2.10 and spark-streaming_2.10, which are provided by spark-submit) into the application JAR. Then use spark-submit to launch your application, for example as shown below.
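    A hypothetical invocation (the class name, JAR name, and master are placeholders, not from the post):

    ./bin/spark-submit \
      --class com.example.FlumePushExample \
      --master yarn-cluster \
      app-jar-with-dependencies.jar [chosen machine's hostname] [chosen port]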