Monthly Archives: March 2011
In my previous post I listed ways to configure logback in glassfish v3. After researching and understanding logback further, I realized there is a simpler way.
Logback searches the classpath for its configuration file (first logback.groovy, then logback-test.xml, and then logback.xml if the former is not found). Hence one can just place logback.xml in the deployed application war and that's it. No jvm option called -Dlogback.configurationFile is needed.
If no logback.xml is found in the classpath, logback configures itself using BasicConfigurator, which causes the logging output to be directed to the console. This minimal configuration attaches a ConsoleAppender to the root logger, which has a default log level of DEBUG. In glassfish v3, anything logged to the console, i.e. System.out or System.err, is sent to the log file via a logger named "javax.enterprise.system.std.com.sun.enterprise.v3.services.impl". A message to System.err is logged at Level.SEVERE; a message to System.out is logged at Level.INFO.
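A minimal logback.xml along these lines can be bundled in the war (a sketch; the appender name and pattern are illustrative):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
```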
See the last logger in the image below.
This article provides details on how to integrate a custom logging framework such as slf4j and logback in glassfish v3. The steps required to achieve this are:
1. Add a jvm option from the admin console, i.e. server-config -> JVM Settings -> JVM Options.
2. Place your logback.xml file under <domain-directory>/config.
The logs would be created relative to <domain-directory>/config.
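Presumably the jvm option referred to in step 1 is logback's configuration property pointing at the file from step 2 — a sketch, with GlassFish's com.sun.aas.instanceRoot property standing in for the domain directory:

```
-Dlogback.configurationFile=${com.sun.aas.instanceRoot}/config/logback.xml
```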
At times one might want to share local code changes with team members instead of committing them. This can be done using the TortoiseSVN patch feature. A patch is a text file that contains the alterations made to specific source files: the lines that have been removed and the lines that have been added.
A developer can create a patch and share this file with team members. A team member then right-clicks and selects the "Apply Patch" option on the same module folder on which "Create Patch" was done. After selecting the patch file, all the local code changes get applied.
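The same create/apply cycle can be sketched from the command line, with plain diff/patch standing in for TortoiseSVN's GUI (the file names here are made up):

```shell
# Simulate a working copy: a pristine file plus a local modification.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'line one\nline two\n' > original.txt
printf 'line one\nline two\nline three\n' > modified.txt

# "Create Patch": capture the local changes as a unified diff.
diff -u original.txt modified.txt > changes.patch || true  # diff exits 1 when files differ

# "Apply Patch": a teammate replays the diff onto their pristine copy.
patch original.txt changes.patch

cat original.txt   # now contains "line three" as well
```

TortoiseSVN's patch files use the same unified-diff format, which is why a patch created by one tool can usually be applied by the other.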
Glassfish uses java.util.logging, and below are two ways to add new loggers for your application or third-party APIs.
1. Set log-levels command
By executing the ‘asadmin set-log-levels org.springframework=FINE’ command a new logger for springframework is added. The logger gets added as soon as the command executes successfully.
2. Modifying the logging.properties
By modifying the logging.properties present in the domain/config folder as –
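In standard java.util.logging syntax the entry looks like this (the package prefix is whatever logger you want to add):

```
org.springframework.level=FINE
```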
The logger gets added as soon as the properties file is saved.
No glassfish server restarts are required in any of the above approaches. The log statements can be seen in the same old server.log file. The newly added logger is visible in the admin console under Configurations->server-config->Logger Settings->Log levels. Any further modifications to the log levels can be done through this admin console UI.
To log the queries executed by hibernate, add a logger 'org.hibernate.SQL' with the debug log level; to log the parameter values of the prepared statements, add 'org.hibernate.type' with the trace log level.
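In a logback.xml, for instance, the two loggers would be declared as follows (a fragment; the appenders are whatever your configuration already defines):

```xml
<logger name="org.hibernate.SQL" level="DEBUG" />
<logger name="org.hibernate.type" level="TRACE" />
```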
A sample trace would look like:
Hibernate: insert into ComponentGroup (name, id) values (?, ?)
19:23:40,753 TRACE StringType:151 – binding ‘group1’ to parameter: 1
19:23:40,754 TRACE LongType:151 – binding ‘1’ to parameter: 2
Reference: Hibernate Core Reference Guide
Spring provides a test annotation called @DirtiesContext which is very helpful when testing singleton beans. When added at the class level with the class mode set to AFTER_EACH_TEST_METHOD, spring creates a new application context for every test method. This annotation comes to your rescue when you have state in your singleton class.
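Spring aside, the failure mode this guards against can be sketched in plain Java (CounterBean is a made-up stand-in for a stateful singleton; in a real Spring test you would put @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD) on the test class instead of creating beans by hand):

```java
// A stateful singleton bean: each test that touches it leaves state behind.
class CounterBean {
    private int count = 0;
    int increment() { return ++count; }
}

public class DirtiesContextDemo {
    public static void main(String[] args) {
        // Shared context: the second "test" sees the first one's state.
        CounterBean shared = new CounterBean();
        shared.increment();                      // test A
        System.out.println(shared.increment());  // test B prints 2, not 1

        // With @DirtiesContext(classMode = AFTER_EACH_TEST_METHOD), spring
        // rebuilds the application context, so each test gets a fresh bean.
        CounterBean fresh = new CounterBean();
        System.out.println(fresh.increment());   // test B prints 1
    }
}
```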
If you are using Oracle's OCI JDBC driver to connect to the database, you might want to turn on logging/tracing to know what's happening behind the scenes. Since Oracle's OCI JDBC driver internally calls OCI (Oracle Call Interface), there are two steps involved in turning on tracing.
1. Enabling the OCI JDBC driver logs
– To enable driver logging, a different ojdbc jar needs to be used, i.e. the debug jar – ojdbc_g.jar – which contains the detailed log statements. In addition, one needs to add two system properties, namely:
- -Doracle.jdbc.Trace=true and
- -Djava.util.logging.config.file=<path-to-logging-properties> (so the driver's java.util.logging output is actually written somewhere)
This oracle documentation link gives a detailed insight into the steps to enable driver logs.
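Putting step 1 together — a sketch in which the file paths and logger levels are illustrative, not prescriptive:

```
# JVM options (with the debug jar ojdbc_g.jar on the classpath)
-Doracle.jdbc.Trace=true
-Djava.util.logging.config.file=/path/to/OracleLog.properties

# OracleLog.properties
.level=SEVERE
oracle.jdbc.level=FINE
oracle.jdbc.handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.pattern=jdbc.log
```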
2. Tracing the OCI dll function calls
– The OCI JDBC driver calls the oci dll to interact with the oracle server. To see which dll function calls are made, use FlexTracer (the evaluation version :)).
Hibernate provides configuration properties to turn on batching of DML statements.
The properties are –
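A sketch of the configuration, assuming the two properties meant are hibernate.jdbc.batch_size and hibernate.jdbc.batch_versioned_data (treat the exact choice as an assumption; the batch size value is illustrative):

```
hibernate.jdbc.batch_size=20
hibernate.jdbc.batch_versioned_data=true
```

Note that batch_versioned_data should only be true when the driver returns real update counts — which, as discussed below, Oracle's standard batching does not.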
After setting values for these two properties at the session factory level, hibernate batches the prepared statements and reduces the number of database hits.
A bit off topic – Hibernate mostly uses prepared statements to execute queries instead of normal statements to avoid SQL injection.
If your database server is Oracle and Oracle's JDBC driver is used for connecting to the database, then there are a few more steps required before hibernate batching is correctly enabled.
Oracle's JDBC driver supports two batching models – standard batching (JDBC 3.0) and Oracle-specific batching. After setting the above two batching properties, hibernate uses the BatchingBatcher implementation to batch the executed prepared statements. This implementation uses the JDBC 3.0 batching model, which has a disadvantage with the Oracle JDBC driver: the driver's implementation of standard batching does not return correct update counts. The return value of preparedStmt.executeBatch() is an int array with all values set to -2 for a successful execution. The update counts returned by the driver are important for handling StaleObjectStateException even if the JPA standard is used.
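That -2 is not arbitrary: it is the JDBC constant Statement.SUCCESS_NO_INFO, which a driver may return from executeBatch() when a statement succeeded but the affected row count is unknown:

```java
import java.sql.Statement;

public class BatchReturnCodes {
    public static void main(String[] args) {
        // Oracle's standard-batching executeBatch() fills the result
        // array with this constant instead of real update counts.
        System.out.println(Statement.SUCCESS_NO_INFO);  // -2
        // Its sibling, returned when a statement in the batch failed:
        System.out.println(Statement.EXECUTE_FAILED);   // -3
    }
}
```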
A bit more info on why I mention JPA here – JPA provides a method called merge which should be used when your hibernate object needs to be updated. The merge implementation executes a select query before firing an update if the object is not already present in the hibernate cache. It then compares the hibernate versions of the loaded and modified objects to check for stale object scenarios. Hence an update query is not fired if the object to be updated is already stale.
Now, the reason the update count is still important even if JPA is used is bulk object updates. When there are many objects to be updated in the same transaction, there can be a time gap between the execution of the select query and the update query, during which another transaction may modify the row – in that case only the update count (zero rows matched) reveals that the object has become stale.
Back to the main topic of hibernate batching – the Oracle JDBC Developer's Guide recommends using the Oracle-specific batching model when the update counts are important. Using the Oracle-specific batching model requires our application to provide an implementation of hibernate's Batcher interface – or to extend the AbstractBatcher class, which is more convenient – and a BatcherFactory. These are needed so that the application can make calls to the Oracle-specific batching classes, e.g. OraclePreparedStatement.sendBatch().
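A Java-flavored pseudocode sketch of the idea, assuming Hibernate 3.x's Batcher SPI (names other than OraclePreparedStatement.sendBatch() should be treated as illustrative):

```java
// Pseudocode sketch -- not a drop-in implementation.
public class OracleBatchingBatcher extends AbstractBatcher {

    private int expectedRowCount;  // counts batched BasicExpectations (see point 1 below)

    public void addToBatch(Expectation expectation) throws SQLException {
        // With Oracle-model batching, executeUpdate() on an
        // OraclePreparedStatement whose batch size has been set via
        // setExecuteBatch(n) merely queues the row instead of sending it.
        OraclePreparedStatement ops = (OraclePreparedStatement) getStatement();
        ops.executeUpdate();
        if (expectation instanceof BasicExpectation) {
            expectedRowCount++;
        }
    }

    protected void doExecuteBatch(PreparedStatement ps) throws SQLException {
        // sendBatch() flushes the queued rows and, unlike the standard
        // executeBatch() with its -2s, returns the real total update count.
        int rowsUpdated = ((OraclePreparedStatement) ps).sendBatch();
        if (rowsUpdated != expectedRowCount) {
            // a row was stale or missing -- surface it as hibernate would
            throw new StaleStateException("Unexpected row count: " + rowsUpdated);
        }
        expectedRowCount = 0;
    }
}
```

A BatcherFactory implementation then simply returns this batcher; its class name is what goes into the hibernate.jdbc.factory_class property mentioned at the end of this post.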
One can find a ready-to-use implementation of these classes here. A few points/differences to consider about this implementation are:
1. Caching the expectations in an array as an instance variable is not needed. It is enough to maintain a counter that is incremented in the addBatch() method each time a BasicExpectation instance is encountered.
2. There are three types of Expectations in hibernate – None, Basic and Param.
- None is mostly used for queries which affect more than one row (e.g. deleting an element collection from an entity).
- Basic is used for insert, update and delete queries which affect only one row (hence, I assume, the expected row count is hard-coded to 1).
- Param is used for callable statements (stored procedures).
Of the above three, None and Basic are batchable. Since the number of rows affected cannot be determined for a None expectation, an instanceof check for BasicExpectation is needed, as mentioned in point 1.
3. Add a new logger to log the statements instead of using the one inherited from AbstractBatcher. If the inherited one is used, you would have to turn on hibernate logging to view the log statements.
After providing implementations of these two interfaces/classes, the last thing you need is another configuration property specifying the fully qualified class name of the BatcherFactory. The property name is hibernate.jdbc.factory_class.
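Wiring it up would then look like this (the class name is a made-up placeholder for your own BatcherFactory implementation):

```
hibernate.jdbc.factory_class=com.example.batching.OracleBatcherFactory
```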