Category Archives: hibernate

Configuring Ehcache with JPA, Hibernate & Spring

This post details the steps required to configure Ehcache as the second-level Hibernate cache when JPA, Hibernate and Spring are used in combination.

Configuring the cache

Add the following properties to the persistence.xml configuration file:

<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider"/>
<property name="hibernate.generate_statistics" value="true"/>
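
Ehcache itself is configured through an ehcache.xml file on the classpath. A minimal sketch might look like the following (the region names are standard, but all tuning values here are illustrative, not from the original setup):

```xml
<ehcache>
    <!-- Fallback region used for entities that have no dedicated cache region -->
    <defaultCache maxElementsInMemory="1000"
                  eternal="false"
                  timeToIdleSeconds="300"
                  timeToLiveSeconds="600"
                  overflowToDisk="false"/>

    <!-- The query cache keeps table update timestamps in this region -->
    <cache name="org.hibernate.cache.UpdateTimestampsCache"
           maxElementsInMemory="5000"
           eternal="true"/>
</ehcache>
```

Entities additionally need a cache concurrency strategy (for example via Hibernate's @Cache annotation) before they are actually stored in the second-level cache.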



Inverse Relationship In Hibernate And JPA

The inverse attribute in Hibernate defines which side of an association is responsible for maintaining it. In a bidirectional one-to-many or a many-to-many relationship, the side with inverse="false" (the default value) has this responsibility and will generate the appropriate SQL queries (insert, update or delete). Changes made to the association on the side with inverse="true" are not persisted to the database.
In Hibernate, only the relationship owner should maintain the relationship; the inverse attribute exists to define which side is the owner.


Inverse attribute with JPA Annotations

inverse="true" is an attribute specified in hbm.xml; in JPA the same thing is expressed with the mappedBy attribute of the @OneToMany (or @ManyToMany) annotation.
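
As a sketch (the entity and field names here are illustrative), the owning side in JPA is the many side holding the foreign key, and the one side marks itself as the inverse side via mappedBy:

```java
import javax.persistence.*;
import java.util.List;

@Entity
public class Department {
    @Id
    private long id;

    // Inverse side: the JPA equivalent of inverse="true" in hbm.xml.
    // Changes made only to this collection are not written to the database.
    @OneToMany(mappedBy = "department")
    private List<Employee> employees;
}

@Entity
public class Employee {
    @Id
    private long id;

    // Owning side: this foreign-key mapping drives the SQL Hibernate issues
    @ManyToOne
    @JoinColumn(name = "department_id")
    private Department department;
}
```

To persist the link, set employee.department on the owning side; merely adding the employee to department.employees has no effect on the database.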

Supporting Custom Isolation Levels With JPA

This post provides a workaround for supporting custom isolation levels with JPA. If you use the HibernateJpaDialect (provided by Spring) and try to set an isolation level different from the default one, you get the exception "Standard JPA does not support custom isolation levels – use a special JpaDialect".

The above error would be seen when your configuration is as below

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" lazy-init="true">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="spmsoftwareunit"/>
    <property name="persistenceXmlLocation" value="classpath:com/spmsoftware/sampleapp/conf/persistence.xml"/>
    <property name="dataSource" ref="oracleDataSourceFactoryBean"/>
    <property name="persistenceProviderClass" value="org.hibernate.ejb.HibernatePersistence"/>
    <property name="jpaDialect">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect"/>
    </property>
    <property name="jpaProperties">
        <props>
            <prop key="">true</prop>
            <prop key="hibernate.cache.use_second_level_cache">false</prop>
            <prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop>
            <prop key="hibernate.jdbc.batch_size">3</prop>
            <prop key="hibernate.jdbc.batch_versioned_data">true</prop>
            <prop key="hibernate.jdbc.factory_class">com.spmsoftware.sampleapp.persistence.OracleBatchingBatcherFactory</prop>
            <prop key="">none</prop>
            <prop key="hibernate.show_sql">true</prop>
        </props>
    </property>
</bean>

and the transaction is applied as

@Transactional(isolation=Isolation.SERIALIZABLE, propagation=Propagation.REQUIRED)


We can extend HibernateJpaDialect to modify the connection instance before the transaction starts:

import org.hibernate.Session;
import org.hibernate.jdbc.Work;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.orm.jpa.vendor.HibernateJpaDialect;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionException;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceException;
import java.sql.Connection;
import java.sql.SQLException;

public class HibernateExtendedJpaDialect extends HibernateJpaDialect {

    private Logger logger = LoggerFactory.getLogger(HibernateExtendedJpaDialect.class);

    /**
     * This method is overridden to set custom isolation levels on the connection.
     * @param entityManager
     * @param definition
     * @return the transaction data object
     * @throws PersistenceException
     * @throws SQLException
     * @throws TransactionException
     */
    @Override
    public Object beginTransaction(final EntityManager entityManager,
            final TransactionDefinition definition) throws PersistenceException,
            SQLException, TransactionException {
        Session session = (Session) entityManager.getDelegate();
        if (definition.getTimeout() != TransactionDefinition.TIMEOUT_DEFAULT) {
            // Propagate the transaction timeout, as the base implementation does
            session.getTransaction().setTimeout(definition.getTimeout());
        }

        logger.debug("Transaction started");

        session.doWork(new Work() {
            public void execute(Connection connection) throws SQLException {
                logger.debug("The connection instance is {}", connection);
                logger.debug("The isolation level of the connection is {} and the isolation level set on the transaction is {}",
                        connection.getTransactionIsolation(), definition.getIsolationLevel());
                DataSourceUtils.prepareConnectionForTransaction(connection, definition);
            }
        });

        entityManager.getTransaction().begin();

        return prepareTransaction(entityManager, definition.isReadOnly(), definition.getName());
    }
}
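
With this class in place, the jpaDialect bean in the Spring configuration above would point at it instead of the stock dialect (the package name here is illustrative):

```xml
<property name="jpaDialect">
    <bean class="com.spmsoftware.sampleapp.persistence.HibernateExtendedJpaDialect"/>
</property>
```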


The above implementation takes care of setting the isolation level specified on the transaction definition by calling DataSourceUtils.prepareConnectionForTransaction(connection, definition).

Since the isolation level is changed on the connection instance, resetting it afterwards is important. This has to be done in the data source, because the connection is closed (returned to the pool) as soon as the transaction is committed, and there is no way to intercept the JpaTransactionManager code flow to reset the isolation level after the commit.
By decorating the data source and resetting the isolation level before the connection pool hands out a connection, one can restore the default isolation level on the connection instance (which in most cases is read committed).
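
A minimal sketch of such a decorator (the class name is my own, and READ COMMITTED is assumed to be the desired default) resets the isolation level whenever a connection is handed out:

```java
import javax.sql.DataSource;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;

/** Decorates a DataSource and resets the isolation level of every connection it hands out. */
class IsolationResettingDataSource implements DataSource {

    private final DataSource target;

    IsolationResettingDataSource(DataSource target) {
        this.target = target;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return reset(target.getConnection());
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return reset(target.getConnection(username, password));
    }

    private Connection reset(Connection connection) throws SQLException {
        // Restore the default before the connection is reused by the next transaction
        connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        return connection;
    }

    // The remaining DataSource methods simply delegate to the target
    @Override public PrintWriter getLogWriter() throws SQLException { return target.getLogWriter(); }
    @Override public void setLogWriter(PrintWriter out) throws SQLException { target.setLogWriter(out); }
    @Override public void setLoginTimeout(int seconds) throws SQLException { target.setLoginTimeout(seconds); }
    @Override public int getLoginTimeout() throws SQLException { return target.getLoginTimeout(); }
    @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException { return target.getParentLogger(); }
    @Override public <T> T unwrap(Class<T> iface) throws SQLException { return target.unwrap(iface); }
    @Override public boolean isWrapperFor(Class<?> iface) throws SQLException { return target.isWrapperFor(iface); }
}
```

The decorator is registered as the dataSource bean in place of the raw pool, so every transaction starts from the default isolation level regardless of what the previous transaction set.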

JPA Query – getSingleResult() – Not a good API

The JPA Query API has two methods to retrieve results from a SELECT query.

1. getSingleResult() – Execute a SELECT query that returns a single untyped result.

2. getResultList() – Execute a SELECT query and return the query results as an untyped List.

The weird part of the getSingleResult() method is that it throws a NoResultException – a runtime exception – if no result is found.
According to Effective Java, "Use checked exceptions for conditions from which the caller can reasonably be expected to recover. Use runtime exceptions to indicate programming errors." Getting no result from a SELECT query is not a programming error.

The workaround is to use the getResultList() method and check whether the list is empty. getSingleResult() should only be used in situations where "I am totally sure that this record exists. Shoot me if it doesn't".
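
A small helper along these lines (the class and method names are my own) packages the workaround: it goes through the result list and treats an empty list as an absent value instead of an exception:

```java
import java.util.List;
import java.util.Optional;

final class QueryResults {

    private QueryResults() {}

    /**
     * Returns the first element of a query result list, or an empty Optional
     * when the SELECT matched no rows -- instead of throwing NoResultException.
     * Intended to be called as: firstResult(query.getResultList())
     */
    static <T> Optional<T> firstResult(List<T> resultList) {
        return resultList.isEmpty() ? Optional.empty() : Optional.of(resultList.get(0));
    }
}
```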

Smart DML Queries With Hibernate Collections

This post details the insert, update and delete queries executed while working with List and Set collections of basic types (i.e. String, Integer, Long etc.). Using the JPA 2 standard, an entity containing an element collection looks like:

@Entity
public class Employee {

  @Id
  private long id;

  @ElementCollection
  private List<String> phones;
}

For a List collection type, the DML queries are executed on the following operations:

Deletes – executed when the size of the collection shrinks and when null values are added
Inserts – executed when the size of the collection grows
Updates – executed when the value or the order changes

The order in which these are executed is – Deletes -> Updates -> Inserts.

For example, if the list has the values {“1”, “2”, “3”, “4”}:

Example 1 – grow the list by adding four new elements: four insert queries would be executed.

Example 2 – change the value of one existing element: only one update query would be executed, since there were already 4 elements in the list {“1”, “2”, “3”, “4”}.
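
As a hypothetical reconstruction of the two examples (the exact original snippets are not shown; the entity plumbing is omitted and only the list operations appear), the modifications could look like:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ListDmlExamples {
    public static void main(String[] args) {
        // employee.getPhones() in the entity above; a plain list here
        List<String> phones = new ArrayList<>(Arrays.asList("1", "2", "3", "4"));

        // Example 1: the collection grows by four elements, so on flush
        // Hibernate would issue four insert queries.
        phones.addAll(Arrays.asList("5", "6", "7", "8"));

        // Example 2: one existing element changes value in place; the size is
        // unchanged, so only one update query would be issued.
        phones.set(0, "9");

        System.out.println(phones);
    }
}
```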

For an attached entity, if the list reference itself is replaced, Hibernate deletes all the entries and re-inserts them, irrespective of whether any values in the list actually changed.

The methods getDeletes(), needsInserting() and needsUpdating() in org.hibernate.collection.PersistentList class can provide more details of the internal workings.

The number of queries fired varies slightly when a Set is used. The Set collection type is not updatable (i.e. update queries are never executed), so in scenarios where the size of the set stays the same but the values change, a series of delete and insert queries is executed.

This behavior of Set suggests a small optimization: use a List (since it is updatable) instead of a Set when your database can avoid duplicate values anyway.

Hibernate Prepared Statement Logging

To log the queries executed by Hibernate, add a logger ‘org.hibernate.SQL’ at DEBUG level; to log the parameter values bound to the prepared statements, add ‘org.hibernate.type’ at TRACE level.
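
With log4j, for instance, the equivalent logger configuration would be:

```
log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.type=TRACE
```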

A sample trace would look like:

Hibernate: insert into ComponentGroup (name, id) values (?, ?)
19:23:40,753 TRACE StringType:151 – binding ‘group1’ to parameter: 1
19:23:40,754 TRACE LongType:151 – binding ‘1’ to parameter: 2

Reference: Hibernate Core Reference Guide

Hibernate Batching

Hibernate provides configuration properties to turn on batching of DML statements.
The properties are –
1.  hibernate.jdbc.batch_size
2.  hibernate.jdbc.batch_versioned_data

After setting values for these two properties at the session factory level, Hibernate batches the prepared statements and reduces the number of database hits.
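
In persistence.xml these might be set as follows (the batch size of 30 is an arbitrary example value):

```xml
<property name="hibernate.jdbc.batch_size" value="30"/>
<property name="hibernate.jdbc.batch_versioned_data" value="true"/>
```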

A bit off topic – Hibernate mostly uses prepared statements to execute queries instead of normal statements to avoid SQL injection.

If your database server is Oracle and Oracle’s JDBC driver is used to connect to the database, then a few more steps are required before Hibernate batching is correctly enabled.

Oracle’s JDBC driver supports two batching models – standard batching (JDBC 3.0) and Oracle-specific batching. After setting the above two batching properties, Hibernate uses the BatchingBatcher implementation to batch the executed prepared statements. This implementation uses the JDBC 3.0 batching model, which has a disadvantage with the Oracle JDBC driver: the driver’s implementation of standard batching does not return correct update counts. The return value of preparedStmt.executeBatch() is an int array with all values set to -2 (Statement.SUCCESS_NO_INFO) for a successful execution. The update counts returned by the driver are important for handling stale-object scenarios (StaleObjectStateException), even if the JPA standard is used.

A bit more info on why I mention JPA here – JPA provides a method called merge, which should be used when your Hibernate object needs to be updated. The merge implementation executes a select query before firing an update if the object is not already present in the Hibernate cache. It then compares the Hibernate versions of the loaded and the modified object to detect stale-object scenarios, so an update query is not fired if the object to be updated is already stale.
The reason the update count is still important even when JPA is used is bulk object updates: when many objects are updated in the same transaction, there can be a time window between the execution of the select query and the update query in which another transaction modifies the row, and then only the update count reveals the stale write.

Back to the main topic of Hibernate batching – the Oracle JDBC Developer’s Guide recommends using the Oracle-specific batching model when the update counts are important. Using the Oracle-specific batching model requires our application to provide an implementation of Hibernate’s Batcher interface – or, more conveniently, to extend the AbstractBatcher class – and a BatcherFactory. These are needed so that the application can call the Oracle-specific batching classes, for example OraclePreparedStatement.sendBatch().

One can find a ready-to-use implementation of these classes here. A few points/differences from this implementation to consider:
1. Caching the expectations in an array as an instance variable is not needed. Maintaining a counter that is incremented in the addBatch() method every time a BasicExpectation instance is encountered is sufficient.
2. There are three types of Expectations in Hibernate – None, Basic and Param:

  • None is mostly used for queries which affect more than one row (for example, deleting an element collection from an entity).
  • Basic is used for insert, update and delete queries which update only one row (hence, I assume, the expected row count is hard-coded to 1).
  • Param is used for callable statements (stored procedures).

Of these three, None and Basic are batchable. Since the number of rows affected cannot be determined for a None expectation, an instanceof check for BasicExpectation is needed, as mentioned in point 1.
3. Add a new logger to log any statements instead of using the one inherited from AbstractBatcher. If the inherited one is used, you would have to turn on Hibernate logging to view the log statements.

After providing implementations of these two interfaces/classes, the last thing to add is another configuration property specifying the fully qualified class name of the BatcherFactory. The property name is hibernate.jdbc.factory_class.
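
Using the factory class name from the Spring configuration shown earlier, the property would look like:

```xml
<prop key="hibernate.jdbc.factory_class">com.spmsoftware.sampleapp.persistence.OracleBatchingBatcherFactory</prop>
```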

Happy Batching!