Ensuring High Level of Performance with WebLogic JDBC Connection Pools


Written by Purushotham 

In this post, you will find some common best practices aimed at ensuring high levels of performance with WebLogic JDBC.
 
  • Use WebLogic DataSources for connection pooling of JDBC connections
Making a real DBMS connection is expensive and slow, so you should use WebLogic data sources to retain and re-use connections. The ideal pattern for pooled connections is to use them as quickly and briefly as possible: get one just when needed, and close it (returning it to the pool) as soon as possible. This maximizes concurrency.
(It is crucial that the connection is a method-level object, never shared between application threads, and that it *is* closed no matter what exit path the application code takes; otherwise the pool can be drained, with all connections taken out and abandoned.
See the best-practices code example farther down in this article. The long-term hardening and integration of WebLogic data sources with applications and other WebLogic APIs make them much the preferred choice over UCP or third-party options.)
 
  • Use Oracle JDBC thin driver (Type 4) rather than OCI driver (Type 2)
The Oracle JDBC thin driver is lightweight (easy to install and administer), platform-independent (written entirely in Java), and generally provides slightly higher performance than the JDBC OCI (Oracle Call Interface) driver. The thin driver does not require any additional software on the client side. The Oracle JDBC FAQ notes that the performance benefit of the thin driver is not consistent and that the OCI driver can even deliver better performance in some scenarios. However, using OCI in WebLogic carries the danger that any bug in the native library can take down an entire WebLogic server, and WebLogic officially no longer supports using the driver in OCI mode.
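For illustration, the connection URL typically differs between the two driver types roughly as follows (the host, port, service name, and TNS alias below are placeholders):

// Thin (Type 4) driver URL: pure Java, no Oracle client install needed on the WebLogic machine.
String thinUrl = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1";

// OCI (Type 2) driver URL: requires a native Oracle client installation and resolves a TNS alias.
String ociUrl = "jdbc:oracle:oci:@MYDB_TNS_ALIAS";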
 
  • Use PreparedStatements objects rather than plain Statements 
With PreparedStatements, the SQL is parsed and its query plan compiled only once by the DBMS; the plan is kept in the DBMS cache and re-used on subsequent executions.
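For example, a parameterized lookup like the sketch below (the table, column, and employeeId variable are made up for illustration) is parsed once and the same plan is re-used with different bind values, whereas building the equivalent SQL string with a plain Statement forces a fresh parse each time:

// Assumes c is a java.sql.Connection obtained from a WebLogic datasource.
PreparedStatement ps = c.prepareStatement("SELECT name FROM employees WHERE id = ?");
ps.setInt(1, employeeId);          // bind variable: the SQL text never changes
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    // ... process the row ...
}
rs.close();
ps.close();                        // close ASAP (see the resource-closing tip below)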
 
  • Use/configure the WebLogic Datasource’s statement cache size wisely.
The datasource can cache, and transparently let you re-use, Prepared/CallableStatements made from a given pooled connection. The pool’s statement cache size (default = 10) determines how many. The cache takes some memory, but it is usually worth the performance gain. Note well, though, that the cache is purged with a least-recently-used (LRU) policy: if the applications using a datasource typically issue 30 distinct prepared statements, each new request would put a new statement in the cache and evict one used 10 statements ago, thrashing the cache so that no statement ever survives long enough to be re-used. The console exposes several statement cache statistics to help you size the cache to service all your statements, but if memory becomes a serious issue, it may be better to set the cache size to zero.

When using the Oracle JDBC driver, also consider its own statement caching as a lower-level alternative to WebLogic caching. The driver can use significant memory per open statement, such as when statements are cached by WebLogic, but if statements are cached at the driver level instead, the driver knows it can share and minimize that memory. To use driver-level statement caching, set the WebLogic statement cache size to zero and add these properties to the list of driver properties for the datasource: implicitCachingEnabled=true and maxStatements=XXX, where XXX is ideally a number large enough to cover all your common statements. As with the WebLogic cache size, too small a number may be useless or worse. Observe your memory usage after the server has run under full load for a while.
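As a rough sanity check, if the WebLogic connection wrapper supports JDBC 4 unwrapping to the vendor interface, you can verify at runtime whether driver-level implicit caching took effect. This is a minimal sketch only, assuming the Oracle thin driver and the same myDatasource field used in the example farther below:

Connection c = myDatasource.getConnection();
try {
    // Unwrap the WebLogic wrapper to reach the underlying Oracle connection.
    oracle.jdbc.OracleConnection oc = c.unwrap(oracle.jdbc.OracleConnection.class);
    System.out.println("Implicit caching enabled: " + oc.getImplicitCachingEnabled());
    System.out.println("Driver statement cache size: " + oc.getStatementCacheSize());
} finally {
    c.close();  // always return the connection to the pool
}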
 
  • Close all JDBC resources ASAP, inline, and for safety, verify this in a finally block
This includes Lob, ResultSet, Statement, and Connection objects, to release memory promptly and avoid certain DBMS-side resource issues. By spec, Connection.close() should close all sub-objects created from it, and the WebLogic version of close() intends to do that while putting the actual connection back into the pool, but some objects may have different implementations in different drivers that do not allow WebLogic to release everything. JDBC objects such as Lobs that are not properly closed can lead to this error:
java.sql.SQLException: ORA-01000: maximum open cursors exceeded.
 
If you don't explicitly close Statements and ResultSets right away, cursors may accumulate and exceed the maximum number allowed in your DB before the Connection is closed. (A sketch of a sub-method that closes its own statements and result sets follows the main example below.)
 
Here is a code example for WebLogic JDBC best practices:
 
public void myTopLevelJDBCMethod() {
    // The connection is a method-level object, not accessible or kept where other threads can use it.
    Connection c = null;

    // ... do all pre-JDBC work ...

    // The try block in which all JDBC for this method (and sub-methods) will be done
    try {
        // Get the connection directly, fresh from a WLS datasource
        // (myDatasource is assumed to be a javax.sql.DataSource field, e.g. looked up once from JNDI).
        c = myDatasource.getConnection();

        // ... do all your JDBC ... You can pass the connection to sub-methods, but they should not keep it,
        // or expect it or any of the objects obtained from it to be open/viable after the end of the method.
        doMyJDBCSubTaskWith(c);

        c.close(); // close the connection as soon as all JDBC is done
        c = null;  // so the finally block knows it has been closed, if it was ever obtained

        // ... do whatever else remains that doesn't need JDBC. I have seen *huge* concurrency improvements
        // from closing the connection ASAP, before doing any non-JDBC post-processing of the data, etc.

    } catch (Exception e) {
        // ... do what you want/need here if you need a catch block, but *always* have the finally block:
    } finally {
        // If we got here somehow without closing c, do it now, without fail,
        // as the first thing in the finally block so it always happens.
        if (c != null) try { c.close(); } catch (Exception ignore) {}
        // ... do whatever else you want in the finally block
    }
}
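And here is a sketch of what a sub-method such as doMyJDBCSubTaskWith(c) might look like: it uses the connection passed to it, closes every Statement and ResultSet it creates on every exit path (try-with-resources guarantees that), and never closes or keeps the connection itself. The query and table are made up for illustration:

private void doMyJDBCSubTaskWith(Connection c) throws SQLException {
    // try-with-resources closes ps and rs even if an exception is thrown,
    // so cursors do not accumulate on the DBMS side.
    try (PreparedStatement ps = c.prepareStatement("SELECT status FROM orders WHERE id = ?")) {
        ps.setLong(1, 42L);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // ... process the row ...
            }
        }
    }
    // Note: do NOT close c here; the top-level method owns it and will return it to the pool.
}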
 
  • Set the datasource Shrink Frequency to 0 for fastest connection availability
A datasource can be configured to vary its count of real connections, closing the unneeded portion (above the minimum capacity) when the current load does not require them, and repopulating itself as needed. This imposes slowness on applications during an uptick in load, while new replacement connections are made. By setting the shrink frequency to zero, the datasource keeps all working connections indefinitely, ready for use. This can be a tradeoff on the DBMS side if too many idle sessions are a concern.
 
  • Set the datasource test frequency to something infrequent or zero
The datasource can be configured to periodically test any connections that are currently unused and idle in the pool, replacing bad ones, independently of any application load. This has some benefits, such as keeping the connections looking busy enough for firewalls and DBMSes that might otherwise silently kill them for inactivity. However, it is overhead in WLS, and is mostly superfluous if you have test-connections-on-reserve enabled, as you should.
 
  • Consider skipping the SQL-query connection test on reserve sometimes
You should always explicitly enable ‘test connections on reserve’ because even with Active GridLink information about DBMS health, individual connections may go bad unnoticed. The only way to ensure a connection you’re getting is good is to have the datasource test it just before you get it. However, there may be cases where testing on every reserve is too expensive, either because it adds too much latency to a short user use-case or because it burdens the DBMS too much. In these cases, if it is somewhat tolerable for an application to occasionally get a bad connection, there is a datasource option ‘seconds to trust an idle connection’ (default 10 seconds): if a connection in the pool has been tested successfully, or previously used successfully by an application, within that number of seconds, the datasource will trust the connection and give it to the requester without testing it. In a heavy-load, quick-turnover environment this can safely and completely avoid the explicit overhead of testing. For maximal safety, however, set ‘seconds to trust an idle connection’ explicitly to zero.
 
  • Consider making the test as lightweight as possible
If the datasource’s ‘Test Table’ parameter is set, the pool will test a connection by doing a ‘select count(*)’ against that table; DUAL is the traditional choice for Oracle. There are options to use the JDBC isValid() call instead, which for *some* drivers is faster: when using the Oracle driver you can set the ‘Test Table’ to SQL ISVALID. The Oracle driver’s pingDatabase() call is another option, enabled by setting the ‘Test Table’ to SQL PINGDATABASE; it checks network connectivity to the DBMS without actually invoking any user-level DBMS functionality. These are faster, but there are rare cases where the user session functionality is broken even though the network connectivity is still good. For XA connections, there is a heavier tradeoff: a test table query will be done in its own XA transaction, which is more overhead, but this is sometimes useful because it catches and works around some session-state problems that would otherwise cause the next user XA transaction to fail. For maximal safety, do a quick real query, such as by setting the test table to SQL SELECT 1 FROM DUAL.
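At the JDBC level, the different test styles boil down to roughly the following. This is a sketch only; the datasource performs the equivalent internally, c is an assumed java.sql.Connection, and the 5-second timeout is an arbitrary example value:

// 'SQL ISVALID' corresponds to the standard JDBC validity check on the connection:
boolean ok = c.isValid(5);

// A 'Test Table' of DUAL, or 'SQL SELECT 1 FROM DUAL', corresponds to a real but cheap query
// that exercises the user session, not just network connectivity:
try (Statement s = c.createStatement();
     ResultSet rs = s.executeQuery("SELECT 1 FROM DUAL")) {
    ok = rs.next();
}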
 
  • Pinned-to-Thread not recommended
Disabled by default, this option can improve performance by transparently assigning pooled connections to specific WLS threads, eliminating contention between threads accessing a datasource. However, the parameter should be used with great care, because the connection pool maximum capacity is ignored when pinned-to-thread is enabled: each thread (possibly numbering in the several hundreds) needs and gets its own connection, and no shrinking can apply to that pool. Beyond that, pinned-to-thread is not recommended for historical/trust reasons: it has not received the long-term usage, testing, and hardening that the rest of WebLogic pooling has.
 
  • Match the Maximum Thread Constraint property with the maximum capacity of database connections
This property (see Environment > Work Managers in the console) sets the maximum number of concurrent application threads/executions. If your applications can run concurrently, unbounded in number except for this WebLogic limit, the maximum capacity of the datasource should match this thread count so that none of your application threads has to wait at an empty pool until another thread returns a connection.
 
Visit Tuning Data Source Connection Pools and Tuning Data Sources for additional parameters to tune in JDBC data sources and connection pools to improve system performance with WebLogic Server, and Performance Tuning Your JDBC Application for application-specific design and configuration.
