Initialize Log4J in a web application with a ServletContextListener

While Log4J will self-initialize if it finds a file called log4j.properties or log4j.xml on the classpath, it will only read the file once at application startup.

I’ve often needed to change the logging level in a long-running application, such as a web application, so that I can switch to debug-level logging as needed without requiring a restart.

Log4J provides configurator classes that will monitor a configuration file for changes and reload the configuration when the file changes. The implementation appears to spawn a background thread that sleeps for a specified interval, then wakes up and checks the configuration file for changes.

You can use this feature by calling the PropertyConfigurator.configureAndWatch() or DOMConfigurator.configureAndWatch() method. The challenge is that you should only call these methods once, when your application starts up, as each call spawns a new thread to monitor the configuration file.
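For reference, a single call to the properties-based variant might look like the following sketch; the class name, file path, and 30-second interval are illustrative choices rather than details from the post.

import org.apache.log4j.PropertyConfigurator;

public class LoggingSetup {

   public static void initLogging() {
      // Load log4j.properties and re-check it for changes every 30 seconds.
      // Each call to configureAndWatch() starts another watchdog thread,
      // so this should be invoked exactly once at application startup.
      PropertyConfigurator.configureAndWatch("/etc/myapp/log4j.properties", 30000L);
   }
}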

In many applications, it is a simple matter to call these methods only once. The question that often comes up is where to call a configureAndWatch() method from within a web application. A colleague of mine once accidentally put this call in the main execution path of a high-traffic web application and brought the site down because the web application container couldn’t manage all the resulting threads.

I’ve seen several ways of ensuring that configureAndWatch() is only called once from within a web application, but I think the cleanest and simplest is to create a ServletContextListener to initialize Log4J.

The following example creates a ServletContextListener which will initialize Log4J when the application is loaded by the container, and will shut down logging for the application when it is unloaded.
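Since the full listing sits behind the post’s read-more link, here is a minimal sketch of such a listener, assuming the properties-based configurator; the init parameter name and the 60-second watch interval are illustrative, not details from the original post.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.log4j.LogManager;
import org.apache.log4j.PropertyConfigurator;

public class Log4jInitListener implements ServletContextListener {

   public void contextInitialized(ServletContextEvent event) {
      // Read the configuration file location from web.xml so it can vary
      // by environment. "log4j-config-file" is a hypothetical parameter name.
      String configFile = event.getServletContext().getInitParameter("log4j-config-file");
      if (configFile != null) {
         // The container calls this method exactly once at startup,
         // so only one watchdog thread is ever created.
         PropertyConfigurator.configureAndWatch(configFile, 60000L);
      }
   }

   public void contextDestroyed(ServletContextEvent event) {
      // Stop logging (and the watchdog thread) when the application is unloaded.
      LogManager.shutdown();
   }
}

The listener is then registered in web.xml with a listener element so the container invokes it once when the application starts and once when it shuts down.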

Finding Items in the Classloader

I recently had a project in which I needed to implement a plug-in framework to provide extensibility.

My goal was for the application to identify plugins automatically, so that simply adding a plugin jar file to the application’s classpath would be enough for it to pick up and register the plugin.

I’m sure this isn’t rocket science, but I didn’t find any examples on the web.

In order to make this work, I implemented a standard for my application plugins. Each plugin jar file must contain a META-INF/plugins.xml file which defines the plugins contained in the jar. I won’t get into the details here. The plugins.xml file is read by the plugin manager which handles registration of plugins and serves as a factory.

The trick was in identifying all of the various plugins.xml files on the classpath. In order to do this, I wrote the following code:
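The original listing is behind the post’s read-more link; a minimal sketch of the technique, using ClassLoader.getResources() to enumerate every copy of META-INF/plugins.xml on the classpath, might look like this (the class and the registerPlugins() method are hypothetical placeholders for the plugin manager’s parsing step).

import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;

public class PluginScanner {

   public void scan() throws Exception {
      ClassLoader loader = Thread.currentThread().getContextClassLoader();
      // getResources() returns one URL per matching entry on the classpath,
      // so each plugin jar's descriptor shows up here.
      Enumeration<URL> descriptors = loader.getResources("META-INF/plugins.xml");
      while (descriptors.hasMoreElements()) {
         URL descriptor = descriptors.nextElement();
         InputStream in = descriptor.openStream();
         try {
            registerPlugins(in);
         } finally {
            in.close();
         }
      }
   }

   private void registerPlugins(InputStream in) {
      // Parsing of plugins.xml and registration with the plugin manager
      // is omitted here.
   }
}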

Managing JDBC Database Connectivity

The single largest source of bugs in my shop is the management of database connectivity in our applications.

Now, my shop is still in the dark ages in that we typically don’t use any kind of persistence framework. Instead, we code our SQL by hand into our classes.

When working with a database in this way, you’ll need to manage several different objects, making sure to properly close each of them in turn or you’ll end up with resource leaks. This is especially true if you are using a connection pool to manage your database connections.

The objects that you need to manage are:

  • Connection
  • Statement
  • ResultSet

You’ll open each of these in turn, and each must be closed out when you are finished with it. Each of these objects has a close() method on it for just this purpose.

It’s common for me to come across the following code during code reviews:

DataSource ds = (Some DataSource here)
try {
   Connection c = ds.getConnection();
   Statement s = c.createStatement();
   ResultSet r = s.executeQuery("SELECT * FROM DUAL");

   // Code to process the query results

   r.close();
   s.close();
   c.close();
}
catch (SQLException e)
{
   // Log the error
}

At first glance, this may look correct. However, we have to think about what happens if an exception is thrown before some or all of the database objects have been closed.

A better solution is to close out the database objects in a finally block which is guaranteed to run:

DataSource ds = (Some DataSource here)
Connection c = null;
Statement s = null;
ResultSet r = null;
try {
   c = ds.getConnection();
   s = c.createStatement();
   r = s.executeQuery("SELECT * FROM DUAL");

   // Code to process the query results
}
catch (SQLException e)
{
   // Log the error
}
finally
{
   // The variables are declared before the try block so they are in scope
   // here; check for null in case an earlier call failed.
   if (r != null) r.close();
   if (s != null) s.close();
   if (c != null) c.close();
}

Of course, this is still problematic, as any of the close() methods can also throw a SQLException. We can get a little crazy and nest additional try/catch/finally blocks inside our finally block to be certain that we close everything:

...
finally
{
   try
   {
      if (r != null) r.close();
   }
   catch (SQLException e)
   {
      // Log the exception
   }
   // Use a finally here in case some other runtime exception
   // is thrown by the previous close() call
   finally
   {
      try
      {
         if (s != null) s.close();
      }
      catch (SQLException e)
      {
         // Log the exception
      }
      // Use a finally here in case some other runtime exception
      // is thrown by the previous close() call
      finally
      {
         try
         {
            if (c != null) c.close();
         }
         catch (SQLException e)
         {
            // Log the exception
         }
      }
   }
}

The above will ensure that all database objects are closed.

Until my shop embraces a persistence framework or object-relational mapping solution such as TopLink or Hibernate, I am stuck using the above method.

One alternative that I’ve begun to endorse is the Commons DbUtils library from the Jakarta Commons project. The API lets you pass in a DataSource, a SQL query, and a handler that processes the results; you never touch connections, statements, or result sets directly, so the library can ensure that everything is closed correctly. This removes some of the tedium of making direct JDBC calls from your code and helps avoid resource leaks by ensuring that all database resources are properly managed.
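As a rough illustration (the class name and query are placeholders, not code from the post), a DbUtils query using a QueryRunner and a MapListHandler might look like this:

import java.sql.SQLException;
import java.util.List;
import java.util.Map;

import javax.sql.DataSource;

import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.ResultSetHandler;
import org.apache.commons.dbutils.handlers.MapListHandler;

public class DbUtilsExample {

   public List<Map<String, Object>> loadRows(DataSource ds) throws SQLException {
      // QueryRunner opens and closes the Connection, Statement, and ResultSet
      // internally, so none of the cleanup code above is needed.
      QueryRunner runner = new QueryRunner(ds);
      // The handler converts each row of the ResultSet into a Map before
      // query() returns and the ResultSet is closed.
      ResultSetHandler<List<Map<String, Object>>> handler = new MapListHandler();
      return runner.query("SELECT * FROM DUAL", handler);
   }
}

All of the try/catch/finally bookkeeping shown earlier happens inside the library call.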