My Quotes


When you were born, you cried and the world rejoiced.
Live your life in such a way that when you go,
THE WORLD CRIES.






Tuesday, December 8, 2015

Repeated columns in JPA


I wanted to map the same database column to two properties in a JPA entity. But when I did, I got an error saying the repeated column should be mapped with insert="false", update="false".


 Approach 1: An easy way to do it. In the entity bean, mark the second property as read-only, exactly as the error message asks:

        @Column(name = "UPLOAD_DATE")
        private Date uploadDate;

        // Second mapping of the same column: it must be read-only
        // (insertable = false, updatable = false) so only uploadDate writes it.
        @Column(name = "UPLOAD_DATE", insertable = false, updatable = false)
        private Timestamp uploadDateAsTime;
 

 Approach 2: The alternative way is to do it via an XML mapping file.


It takes care of giving unique names to embeddable field columns in the target entity class.
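A minimal orm.xml sketch of what this XML approach can look like, assuming an embeddable type reused twice in one entity — the class, attribute, and column names here are hypothetical:

```xml
<!-- META-INF/orm.xml (sketch; entity, attribute, and column names are illustrative) -->
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="2.0">
    <entity class="com.example.Upload">
        <attributes>
            <embedded name="createdInfo">
                <attribute-override name="date">
                    <column name="CREATED_DATE"/>
                </attribute-override>
            </embedded>
            <embedded name="modifiedInfo">
                <attribute-override name="date">
                    <column name="MODIFIED_DATE"/>
                </attribute-override>
            </embedded>
        </attributes>
    </entity>
</entity-mappings>
```

Because each override assigns a distinct column name, the two embedded fields no longer collide on the same column.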

                
                
                

Friday, December 4, 2015

Spring Custom Context Loader


Use a custom ServletContextListener wired into Spring, so that Spring beans can be injected into the listener and used at application startup.


import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.web.context.support.SpringBeanAutowiringSupport;

import com.fra.exception.ApplicationException;
import com.fra.exception.SystemException;
import com.fra.fv.service.IModuleConfigService;
// ExceptionUtil (used below) must also be imported from your project's utility package.

@WebListener
public class CustomContextListener implements ServletContextListener {
 @Autowired
 @Qualifier(value = "mdlConfigService")
 IModuleConfigService mdlConfigService;
 final static Logger logger = Logger.getLogger(CustomContextListener.class);

 public CustomContextListener() {
  super();
 }

 @Override
 public void contextInitialized(ServletContextEvent event) {
  SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(this);
  try {
   mdlConfigService.loadModuleConfigs();
  } catch (ApplicationException e) {
   logger.error(ExceptionUtil.getFullExceptionAsString(e, 50));
  } catch (SystemException e) {
   logger.error(ExceptionUtil.getFullExceptionAsString(e, 50));
  }
 }

 @Override
 public void contextDestroyed(ServletContextEvent event) {
 }
}
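A note on wiring: SpringBeanAutowiringSupport looks up the root web application context, so Spring's ContextLoaderListener must also be registered and must initialize before this listener. A minimal web.xml fragment, assuming the default context file location (the path is illustrative):

```xml
<!-- web.xml fragment (sketch); contextConfigLocation path is illustrative -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
```

Listeners declared in web.xml start before @WebListener-annotated ones, so the Spring context is ready when contextInitialized runs.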

Wednesday, August 5, 2015

Clear OUTLOOK CACHE


Solution 1:   Manually Delete    (For ALL Versions of Outlook)

1. Open Outlook.
2. In the To field of a new message, start typing the e-mail address you wish to clear from the cache.
3. Press the Down-Arrow button (on the keyboard) to select the e-mail address.
4. Press the Delete button (on the keyboard).
5. That e-mail entry is now removed from the Outlook Auto-complete cache.

Solution 2:  Empty Auto-Complete List Button    (ONLY for Outlook Versions 2010 & 2013)

1. Open Outlook.
2. Click File | Options.
3. Click on the Mail tab on the right.
4. Scroll down to Send messages and click the Empty Auto-Complete List button.

Solution 3:  Recreate the .nk2 File    (ONLY for Outlook Versions 2003 & 2007)

1. Close Outlook.
2. Open Windows Explorer or Internet Explorer.
3. Paste the following into the Address Bar:  %APPDATA%\Microsoft\Outlook
4. Delete the following file from this folder:   Outlook.nk2

Important Note:  This will delete ALL of your cached e-mail addresses.  It should ONLY be used if you want to wipe your cache clean or if there are corruption issues in your .nk2 file.  The first time you open Outlook after deleting this file, Outlook will create a NEW .nk2 cache file automatically and start caching the e-mail addresses you use from here on out.

Wednesday, January 14, 2015

Flume and Spark Integration



Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and
moving large amounts of log data.


 Approach 1: Flume-style Push-based Approach 
  • Choose a machine in your cluster: when your Flume + Spark Streaming application is launched, one of the Spark workers must run on that machine.
  • Flume can be configured to push data to a port on that machine.
  • Due to the push model, the streaming application needs to be up, with the receiver scheduled and listening on the chosen port, for Flume to be able to push data.

  • Configuring Flume
    agent.sinks = avroSink
    agent.sinks.avroSink.type = avro
    agent.sinks.avroSink.channel = memoryChannel
    agent.sinks.avroSink.hostname = 
    agent.sinks.avroSink.port = 
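For completeness, the avro sink above needs a source and a channel defined in the same agent. A minimal sketch — the source type, component names, and ports are illustrative; fill the sink hostname/port with your receiver's address:

```
# flume.conf sketch -- source type, names, and ports are illustrative
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = avroSink

agent.sources.netcatSource.type = netcat
agent.sources.netcatSource.bind = 0.0.0.0
agent.sources.netcatSource.port = 44444
agent.sources.netcatSource.channels = memoryChannel

agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 10000

agent.sinks.avroSink.type = avro
agent.sinks.avroSink.channel = memoryChannel
agent.sinks.avroSink.hostname = <receiver-host>
agent.sinks.avroSink.port = <receiver-port>
```

Start the agent with flume-ng, naming the agent and pointing at this file, once the Spark Streaming receiver is up and listening.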
    

    
    Configuring Spark Streaming Application
    Linking: In your SBT/Maven project definition, link your streaming application against the following artifact:
    
    
    groupId = org.apache.spark
    artifactId = spark-streaming-flume_2.10
    version = 1.1.0
    

    Programming: In the streaming application code, import FlumeUtils and create the input DStream as follows:

    import org.apache.spark.streaming.flume.*;

    JavaReceiverInputDStream<SparkFlumeEvent> flumeStream =
        FlumeUtils.createStream(streamingContext, [chosen machine's hostname], [chosen port]);

    Note that the hostname should be the same as the one used by the resource manager in the cluster, so that resource allocation can match the names and launch the receiver on the right machine.
    

    
    Deploying: Package spark-streaming-flume_2.10 and its dependencies (except spark-core_2.10 and spark-streaming_2.10, which are provided by spark-submit) into the application JAR. Then use spark-submit to launch your application.
    
    

Tuesday, December 23, 2014

To empty the Message Queue (IBM MQ).

  • Pom.xml entries.
    
     
     <parent>
      <groupId>mycompanygrpid</groupId>
      <artifactId>parent</artifactId>
      <version>0.0.1</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.test.mqtest</groupId>
     <artifactId>mymqpom</artifactId>
     <version>1.0.0</version>
     <packaging>war</packaging>
     <name>Test MQ</name>
     <description>Test MQ</description>
     <properties>
      <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      <webSphereMQVersion>7.0.0.0</webSphereMQVersion>
      <webSphereMQClientVersion>1.0.0.0</webSphereMQClientVersion>
     </properties>
     <dependencies>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.mq</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.mq.jmqi</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.mq.jmqi.remote</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.mq.jmqi.system</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.mqjms</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.commonservices</artifactId>
       <version>${webSphereMQClientVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.commonservices.j2se</artifactId>
       <version>${webSphereMQClientVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.jms</artifactId>
       <version>${webSphereMQClientVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.jms.internal</artifactId>
       <version>${webSphereMQClientVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.provider</artifactId>
       <version>${webSphereMQClientVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.wmq</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.wmq.common</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>com.ibm.msg.client.wmq.factories</artifactId>
       <version>${webSphereMQVersion}</version>
      </dependency>
      <dependency>
       <groupId>middleware</groupId>
       <artifactId>dhbcore</artifactId>
       <version>DH610-GOLD</version>
      </dependency>
      <dependency>
       <groupId>org.glassfish</groupId>
       <artifactId>javax.jms</artifactId>
       <version>10.0-b28</version>
      </dependency>
     </dependencies>
    
    

  • Actual code to clean up the queue.
      // qManager (queue manager name) and inputQName are fields of the enclosing class.
      private void emptyIt() {
         MQQueueManager _queueManager = null;
         MQQueue queue = null;
         int openOptions = MQC.MQOO_INQUIRE + MQC.MQOO_FAIL_IF_QUIESCING + MQC.MQOO_INPUT_SHARED;
         boolean loopAgain = true;
         try {
            _queueManager = new MQQueueManager(qManager);
            queue = _queueManager.accessQueue(inputQName, openOptions, null, null, null);
            System.out.println("EmptyQ: Opened queue " + inputQName);

            int depth = queue.getCurrentDepth();
            System.out.println("EmptyQ: Current depth: " + depth);

            MQGetMessageOptions getOptions = new MQGetMessageOptions();
            getOptions.options = MQC.MQGMO_NO_WAIT + MQC.MQGMO_FAIL_IF_QUIESCING + MQC.MQGMO_ACCEPT_TRUNCATED_MSG;

            MQMessage message;
            while (loopAgain) {
               message = new MQMessage();
               try {
                  // Read with a 1-byte buffer; we only want to consume the message, not inspect it.
                  queue.get(message, getOptions, 1);
               } catch (MQException e) {
                  if (e.completionCode == 1 && e.reasonCode == MQException.MQRC_TRUNCATED_MSG_ACCEPTED) {
                     // Truncation is expected: the message was still removed from the queue.
                  } else {
                     loopAgain = false;
                     if (e.completionCode == 2 && e.reasonCode == MQException.MQRC_NO_MSG_AVAILABLE) {
                        // Queue is empty -- we are done, no error.
                     } else {
                        System.err.println("EmptyQ: MQException: " + e.getLocalizedMessage());
                     }
                  }
               }
            }
            System.out.println("EmptyQ: Queue emptied.");
         } catch (MQException e1) {
            System.err.println("EmptyQ: MQException: " + e1.getLocalizedMessage());
         } finally {
            // close() and disconnect() also throw MQException, so wrap the cleanup.
            try {
               if (queue != null) {
                  queue.close();
               }
               if (_queueManager != null) {
                  _queueManager.disconnect();
               }
            } catch (MQException e2) {
               System.err.println("EmptyQ: cleanup failed: " + e2.getLocalizedMessage());
            }
         }
      }
     
Monday, November 24, 2014

Configure Sonar with Eclipse

Externalize Named Queries in JPA

    
    
    
    
    Main XML file (persistence.xml):

    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
     <persistence-unit name="myPU" transaction-type="JTA">
      <!-- unit name is illustrative; data source and mapping file as in the post -->
      <jta-data-source>java:/someDS</jta-data-source>
      <mapping-file>META-INF/myproject/sample.xml</mapping-file>
     </persistence-unit>
    </persistence>

    
    sample.xml (containing the named query):

    <entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" version="2.0">
     <!-- query name is illustrative; the JPQL is from the post -->
     <named-query name="Website.findByUserId">
      <query>
       SELECT website
       FROM Website website
       WHERE website.userId = :userId
      </query>
     </named-query>
     ...
    </entity-mappings>