Monthly Archives: April 2011

Enabling Logback JMX MBean In Glassfish

Logback support for JMX can be enabled by simply adding a <jmxConfigurator/> tag in the configuration file, but this is not how a web application should register the logback MBean if your web application is deployed in a Glassfish application server. If you do, you would get the error message below when accessing the logback MBean through JConsole –

Problem invoking getLoggerLevel : java.lang.IllegalStateException: WEB9031: WebappClassLoader unable to load resource [java.lang.Object], because it has not been started or was already stopped

The root cause is an integration issue between logback and Glassfish. On startup, Glassfish instantiates two classloaders for each deployed web application. The first one is a temporary one which is later destroyed.

When your web-application classes get loaded with this temporary classloader, logback initialization gets triggered (if the Logger instances are declared as static fields in your application classes, which is usually the case), as I explain in my older post. During this initialization process logback creates a JMX MBean instance (JMXConfigurator.class) and registers it with the MBean server. At a certain point Glassfish destroys the temporary classloader and nullifies all the static or final fields of the loaded classes. In a later step a new classloader is instantiated, which becomes the actual classloader for the web application. This classloader starts and loads the web context, and the deployment continues.

The above-mentioned error is displayed because, while nullifying the classes during the destroy step of the temporary classloader, the registered MBean is not unregistered, and logback does not re-register the MBean during the logback initialization step (which is triggered after the actual classloader is instantiated) since an MBean with that name was already registered. Here is a short snippet of code from the JMXConfiguratorAction class which does the MBean registration.

    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    if (!MBeanUtil.isRegistered(mbs, objectName)) {
      // register only if the named JMXConfigurator has not been previously
      // registered. Unregistering an MBean within an invocation of itself
      // caused jconsole to throw an NPE. (This occurs when the reload* method
      // unregisters the JMXConfigurator.)
      JMXConfigurator jmxConfigurator = new JMXConfigurator((LoggerContext) context, mbs,
          objectName);
      try {
        mbs.registerMBean(jmxConfigurator, objectName);
      } catch (Exception e) {
        addError("Failed to create mbean", e);
      }
    }

A remaining mystery is the reason behind Glassfish creating a temporary classloader (it is hard to figure it out exactly from the source code 🙂 ).
One simple workaround to resolve this issue is to not add the <jmxConfigurator/> tag in the logback configuration file and instead manually register the logback JMX MBean in a servlet context listener. Have a look at JMXConfiguratorAction.class for more details on how to register the logback MBean.
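The register/unregister lifecycle such a listener needs can be sketched with a stand-in MBean. Everything below is illustrative: the DummyConfigurator class and the object name are placeholders, not logback's real ones; in an actual listener you would construct a ch.qos.logback.classic.jmx.JMXConfigurator (as in the snippet above) in contextInitialized and unregister it in contextDestroyed.

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class LogbackJmxLifecycle {

    // Stand-in for logback's JMXConfigurator; a real listener would register
    // a ch.qos.logback.classic.jmx.JMXConfigurator instance instead.
    public interface DummyConfiguratorMBean {
        String getName();
    }

    public static class DummyConfigurator implements DummyConfiguratorMBean {
        public String getName() {
            return "dummy";
        }
    }

    // Illustrative object name; logback derives its own from the context name.
    static ObjectName objectName() throws Exception {
        return new ObjectName("ch.qos.logback.classic:Name=default,Type=DemoConfigurator");
    }

    // contextInitialized(): register the MBean only if absent.
    static void register(MBeanServer mbs) throws Exception {
        if (!mbs.isRegistered(objectName())) {
            mbs.registerMBean(new DummyConfigurator(), objectName());
        }
    }

    // contextDestroyed(): unregister so a redeploy can re-register cleanly.
    static void unregister(MBeanServer mbs) throws Exception {
        if (mbs.isRegistered(objectName())) {
            mbs.unregisterMBean(objectName());
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        register(mbs);
        System.out.println("registered=" + mbs.isRegistered(objectName())); // registered=true
        unregister(mbs);
        System.out.println("registered=" + mbs.isRegistered(objectName())); // registered=false
    }
}
```

The explicit unregister in contextDestroyed is the key step Glassfish's temporary-classloader teardown skips.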

This note could be a good addition to the well documented Logback manual :).


Glassfish Source Code

The source code of Glassfish can be downloaded using the svn url – https://svn.java.net/svn/glassfish~svn. If you want to download the source code for a specific version of Glassfish, browse the svn repository and add the relevant tag suffix. For example, to download the source code for Glassfish 3.1, use the url – https://svn.java.net/svn/glassfish~svn/tags/3.1.

By the way – to find out the version of the Glassfish server used for deployment, use the command – asadmin version. The output of the command includes the build version, which helps you figure out the tag to use when downloading the source code.

How to find if a method returns void?

This question typically comes up when using reflection to find the return type of a method. The Void class comes to the rescue.

   public class VoidClassApp {
       public static void main(String[] args) throws NoSuchMethodException {
           Class<?> clazz = Test.class.getMethod("method").getReturnType();
           System.out.println(clazz == Void.TYPE);
       }
   }

   class Test {
       public void method() {
           System.out.println("Does Nothing");
       }
   }

The output is: true

Understanding Web Servers – The Need for a Web Server

As an introduction: a web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.

This post tries to provide some details on the need for a web server as part of your application architecture. Below are some points to take into consideration:

  • Clustering. By using Apache as a front end you can let Apache act as a front door to your content across multiple Tomcat instances. If one of your Tomcats fails, Apache ignores it and your sysadmin can sleep through the night. This point can be ignored if you use a hardware load balancer and Tomcat’s clustering capabilities.
  • Clustering/Security. You can also use Apache as a front door to different Tomcats for different URL namespaces (/app1/, /app2/, /app3/, or virtual hosts). The Tomcats can then be each in a protected area and from a security point of view, you only need to worry about the Apache server. Essentially, Apache becomes a smart proxy server.
  • Security. This topic can sway one either way. Java has the security manager while Apache has a larger mindshare and more tricks with respect to security. Depending on your scenario, one might be better than the other. But also keep in mind, if you run Apache with Tomcat – you have two systems to defend, not one.
  • Add-ons. Adding on CGI, Perl, or PHP is very natural in Apache; it is slower and more of a kludge in Tomcat. Apache also has hundreds of modules that can be plugged in at will. Tomcat could have this ability, but the code hasn’t been written yet.
  • Decorators. With Apache in front of Tomcat, you can use any number of decorators that Tomcat doesn’t support or doesn’t yet have code support for. For example, mod_headers, mod_rewrite, and mod_alias could be written for Tomcat, but why reinvent the wheel when Apache has done it so well?
  • Speed. Apache is faster at serving static content than Tomcat. But unless you have a high-traffic site, this point is moot. In some scenarios Tomcat can even be faster than Apache httpd, so benchmark YOUR site. Tomcat can perform at httpd speeds when using the proper connector (APR with sendFile enabled). Speed should not be the deciding factor when choosing between Apache httpd and Tomcat.
  • Socket handling/system stability. Apache has better socket handling with respect to error conditions than Tomcat. The main reason is that Tomcat must perform all its socket handling via the JVM, which needs to be cross-platform, while socket optimization is a platform-specific ordeal. Most of the time the Java code is fine, but when you are bombarded with dropped connections, invalid packets, and invalid requests from invalid IPs, Apache does a better job of dropping these error conditions than a JVM-based program. (YMMV)

Inspired by this discussion on stackoverflow.

MediaWiki Editor Help

It’s tough to edit a wiki page, but this link can make your job a bit easier. If you want to add some code (Java, XML, or any other language), just add a <pre> tag before and after the code snippet so that the formatting in the code is preserved as is.
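For example, a snippet wrapped like this keeps its indentation on the rendered page (the code inside is just a made-up illustration):

```
<pre>
public void hello() {
    System.out.println("formatting is preserved");
}
</pre>
```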

Compile time bindings with slf4j

SLF4J (Simple Logging Facade for Java) is an API which is implemented by various logging frameworks (log4j, logback, jul*, jcl**, etc.). The SLF4J distribution ships with several jar files referred to as “SLF4J bindings”, with each binding corresponding to a supported framework.

The SLF4j Manual says

SLF4J does not rely on any special class loader machinery. In fact, each SLF4J binding is hardwired at compile time to use one and only one specific logging framework. For example, the slf4j-log4j12-1.6.1.jar binding is bound at compile time to use log4j.

I was curious to understand how slf4j binds to the logging implementation at compile time. After looking at the code I got a better picture. This is how it works:

1. Any class which wants to log includes a call to LoggerFactory.getLogger(UserClass.class.getName());

2. The LoggerFactory.getLogger() method tries to load a class with the qualified name “org/slf4j/impl/StaticLoggerBinder.class” (if not already loaded). The org.slf4j.impl.StaticLoggerBinder class is supposed to be present in any logging framework binder library (e.g. logback) which implements slf4j. If no such class is found, a default no-operation logger implementation is used. If more than one implementation is found, an error is printed. This check can be found in the singleImplementationSanityCheck() method of the LoggerFactory class.

3. StaticLoggerBinder is supposed to be a singleton class. The LoggerFactory code then calls the StaticLoggerBinder.getSingleton() method, which is also supposed to be present in the StaticLoggerBinder class.

4. The StaticLoggerBinder.getSingleton() method then takes over from there to instantiate the LoggerContext (one of the interfaces exposed in the slf4j api).
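The lookup in step 2 can be mimicked with a small sketch. BindingLookupSketch is a hypothetical class, not part of slf4j; it merely counts how many copies of StaticLoggerBinder.class are visible on the classpath, which is essentially what singleImplementationSanityCheck() checks.

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class BindingLookupSketch {

    // The resource path slf4j's LoggerFactory looks up at initialization.
    static final String BINDER_PATH = "org/slf4j/impl/StaticLoggerBinder.class";

    // Collect every copy of StaticLoggerBinder.class visible on the classpath.
    static List<URL> findBindings() throws IOException {
        ClassLoader cl = BindingLookupSketch.class.getClassLoader();
        return Collections.list(cl != null
                ? cl.getResources(BINDER_PATH)
                : ClassLoader.getSystemResources(BINDER_PATH));
    }

    public static void main(String[] args) throws IOException {
        List<URL> bindings = findBindings();
        if (bindings.isEmpty()) {
            // slf4j would fall back to its no-operation logger here
            System.out.println("No binding found");
        } else if (bindings.size() > 1) {
            // slf4j would print a "multiple bindings" error here
            System.out.println("Multiple bindings found: " + bindings);
        } else {
            System.out.println("Single binding: " + bindings.get(0));
        }
    }
}
```

Run this with one, zero, or several binding jars on the classpath to see the three cases slf4j distinguishes.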

This explains how slf4j binds to the logging framework. It also explains why the slf4j distribution comes with binding libraries for different logging frameworks like log4j. Logback implements slf4j natively, hence the slf4j distribution does not need a binding jar for logback.

This post proves the slogan “only in code we trust” 🙂

*jul – java.util.logging
**jcl – jakarta commons logging.

Spring Singletons and Lifecycle Annotations

A small snippet of code to help present the topic:


import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Service;

@Service("singletonServiceBean")
@Scope("singleton") // this is the default value; set explicitly just for readability
public class SingletonServiceBean {

    private final Logger logger = LoggerFactory.getLogger(SingletonServiceBean.class);

    public SingletonServiceBean() {
        logger.debug("Constructor called for the service bean");
    }

    @PostConstruct
    public void initialize() {
        logger.debug("Initialize the bean in PostConstruct");
    }
}

About Singleton Beans –
After you create an application context, you may notice the constructor being called twice even though the bean is declared as a singleton. The reason is that Spring AOP is proxy-based: the container creates the ‘raw’ bean instance and a proxy for it if the bean is affected by AOP rules. If you use cglib-based proxies, the proxy class is created at runtime by subclassing the original bean class. When the proxy object is created, it calls the superclass constructor, so it looks as if two instances are created.
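The double construction can be demonstrated without Spring. The hand-written subclass below stands in for the proxy class cglib generates at runtime; all the names are illustrative.

```java
public class ProxyConstructorDemo {

    static int constructorCalls = 0;

    // The 'raw' bean class (stand-in for SingletonServiceBean).
    static class ServiceBean {
        ServiceBean() {
            constructorCalls++;
        }
    }

    // cglib generates a subclass of the bean class at runtime; this
    // hand-written subclass stands in for that generated proxy class.
    static class ServiceBeanProxy extends ServiceBean {
    }

    public static void main(String[] args) {
        ServiceBean raw = new ServiceBean();        // container creates the raw instance
        ServiceBean proxy = new ServiceBeanProxy(); // proxy creation calls super() again
        System.out.println("Constructor calls: " + constructorCalls); // prints 2
    }
}
```

This is exactly why constructor-side logging fires twice while a @PostConstruct method fires once.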

@PostConstruct and @PreDestroy – some introduction on these

The @PostConstruct and @PreDestroy annotations (introduced with Spring 2.5) can be used to trigger Spring initialization and destruction callbacks respectively. This feature extends but does not replace the two existing options –
i. implementing Spring’s InitializingBean and DisposableBean interfaces
ii. explicitly declaring the “init-method” and “destroy-method” attributes of the ‘bean’ element in XML. This explicit configuration is sometimes necessary, such as when the callbacks need to be invoked on code that is outside of the developer’s control, e.g. when a third-party DataSource is used, a ‘destroy-method’ is declared explicitly.

To enable these lifecycle annotations, just add the tag below to the Spring XML:

<context:annotation-config/>

There are two reasons for using the @PostConstruct annotation instead of the constructor:
1. When the constructor is called, the bean is not yet initialized, i.e. no dependencies are injected. In the @PostConstruct method the bean is fully initialized and you can use its dependencies.

2. This is the contract that guarantees the method will be invoked only once in the bean lifecycle. It may happen (as with the singletonServiceBean example above) that a bean is instantiated multiple times by the container in its internal workings, but @PostConstruct is guaranteed to be invoked only once.

One interesting point about the annotation-based lifecycle callbacks is that more than one method may carry either annotation, and all annotated methods will be invoked.

JPA Query – getSingleResult() – Not a good API

The JPA Query API has two methods to retrieve results from a select query.

1. getSingleResult() – Execute a SELECT query that returns a single untyped result.

2. getResultList() – Execute a SELECT query and return the query results as an untyped List.

The weird part of the getSingleResult() method is that it throws a NoResultException if no result is found. NoResultException is a runtime exception.
According to Effective Java, “Use checked exceptions for conditions from which the caller can reasonably be expected to recover. Use runtime exceptions to indicate programming errors.” Getting no result from a select query is not a programming error.

The workaround for this API is to use the getResultList() method and check if the list is empty. getSingleResult() can only be used in situations where “I am totally sure that this record exists. Shoot me if it doesn’t”.
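A sketch of that workaround, wrapped in a hypothetical helper (not part of the JPA API), using Java 8’s Optional for the empty case; in real code the list would come from query.getResultList().

```java
import java.util.List;
import java.util.Optional;

public class SingleResultHelper {

    // Hypothetical helper: callers get an Optional instead of risking a
    // NoResultException. Pass in the output of query.getResultList().
    static <T> Optional<T> singleOrEmpty(List<T> results) {
        if (results.isEmpty()) {
            return Optional.empty();           // no row: not an error, just empty
        }
        if (results.size() > 1) {
            // more than one row IS a programming error for a "single" query
            throw new IllegalStateException("Expected at most one result, got " + results.size());
        }
        return Optional.of(results.get(0));
    }

    public static void main(String[] args) {
        System.out.println(singleOrEmpty(List.of()).isPresent());       // false
        System.out.println(singleOrEmpty(List.of("row")).orElse("?")); // row
    }
}
```

The design keeps the runtime exception only for the genuinely exceptional case (duplicate rows), matching the Effective Java guideline quoted above.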

Smart DML Queries With Hibernate Collections

This post details the insert, update, and delete queries executed while working with list and set collections of basic types (String, Integer, Long, etc.). Using the JPA 2 standard, an entity containing an ElementCollection looks like:

@Entity
public class Employee {
  @Id
  @Column(name="EMP_ID")
  private long id;
  ...
  @ElementCollection
  @CollectionTable(
        name="PHONE",
        joinColumns=@JoinColumn(name="OWNER_ID")
  )
  @Column(name="PHONE_NUMBER")
  @OrderColumn(name="PHONE_ORDER")
  private List<String> phones;
  ...
}

For a List collection type, the DML queries are executed under these conditions:

Deletes – executed when the size of the collection shrinks and when null values are added
Inserts – executed when the size of the collection grows
Updates – executed when a value or the order changes

The order in which these are executed is: Deletes -> Updates -> Inserts.

For example, if the list has values {"1", "2", "3", "4"}:

Example 1 – modify the list by executing the code below:

    entity.getPhoneNumbers().add("5");
    entity.getPhoneNumbers().add("6");
    entity.getPhoneNumbers().add("7");
    entity.getPhoneNumbers().add("8");

    entityManager.merge(entity);

4 insert queries would be executed.

Example 2 – modify the list as follows:

    entity.getPhoneNumbers().clear();
    entity.getPhoneNumbers().add("5");
    entity.getPhoneNumbers().add("6");
    entity.getPhoneNumbers().add("7");
    entity.getPhoneNumbers().add("8");

    entityManager.merge(entity);

Only update queries would be executed (no deletes or inserts), since the list already held 4 elements {"1", "2", "3", "4"} and only the value at each position changes.

For an attached entity, if the list reference itself is replaced, hibernate deletes and re-inserts all the entries irrespective of whether any value changes were actually made to the list.

The methods getDeletes(), needsInserting() and needsUpdating() in org.hibernate.collection.PersistentList class can provide more details of the internal workings.

The number of queries fired varies slightly when a Set is used. The set collection type is not updatable (i.e. update queries are never executed). So in scenarios where the size of the set remains the same but the values change, a series of delete and insert queries is executed.

This behavior of the Set suggests a small optimization – use a List (since it is updatable) instead of a Set when duplicate values can be prevented at the database level.

Log to same file from multiple jvms using logback

Logback includes a feature which allows multiple JVMs, potentially running on different hosts, to log to the same log file. This feature can be enabled by setting the “prudent” property of a FileAppender to true.

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
         <!-- Support multiple-JVM writing to the same log file -->
         <prudent>true</prudent>
</appender>

Note – RollingFileAppender extends FileAppender with the capability to rollover log files.

This feature is implemented using the file locking API introduced in JDK 1.4. The two writers that contend for the same file may be in different JVMs, or one may be a Java thread and the other a native thread in the operating system. The file locks are visible to other operating-system processes because Java file locking maps directly to the native operating-system locking facility.
A small code snippet from logback's FileAppender class implementing the prudent feature looks like:


public class FileAppender<E> extends OutputStreamAppender<E> {

  @Override
  protected void writeOut(E event) throws IOException {
    if (prudent) {
      safeWrite(event);
    } else {
      super.writeOut(event);
    }
  }

  final private void safeWrite(E event) throws IOException {
    ResilientFileOutputStream resilientFOS =
        (ResilientFileOutputStream) getOutputStream();
    FileChannel fileChannel = resilientFOS.getChannel();
    if (fileChannel == null) {
      return;
    }
    FileLock fileLock = null;
    try {
      fileLock = fileChannel.lock();
      long position = fileChannel.position();
      long size = fileChannel.size();
      if (size != position) {
        fileChannel.position(size);
      }
      super.writeOut(event);
    } finally {
      if (fileLock != null) {
        fileLock.release();
      }
    }
  }
}
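The same lock, seek to end, write, release sequence can be tried standalone with just the JDK. PrudentWriteSketch and appendWithLock are illustrative names, not logback code; the sketch mirrors what safeWrite() does per event.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PrudentWriteSketch {

    // Lock the whole file, seek to the true end (another JVM may have appended
    // since our last write), append the line, then release the lock.
    static void appendWithLock(Path log, String line) throws IOException {
        try (FileChannel channel = FileChannel.open(log,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = channel.lock(); // blocks until no other process holds it
            try {
                channel.position(channel.size()); // seek past concurrent appends
                channel.write(ByteBuffer.wrap((line + "\n").getBytes()));
            } finally {
                lock.release();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("prudent-demo", ".log");
        appendWithLock(log, "first line");
        appendWithLock(log, "second line");
        System.out.println(Files.readAllLines(log)); // [first line, second line]
    }
}
```

Note the cost: prudent mode acquires and releases an OS-level lock on every event, which is why logback keeps it off by default.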