A very common use case for SQL logging is troubleshooting database-related issues during development.
For this purpose, P6Spy is a useful library: it intercepts and logs all SQL statements and parameter values before they are sent to the database. P6Spy is free and works with any application that uses a JDBC driver.
If your application uses a DataSource, simply wrap your current DataSource object with P6DataSource. P6DataSource has a constructor that accepts the DataSource to wrap. This is by far the simplest method, especially if you use a dependency injection framework such as Spring or Guice.
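As a minimal sketch, wrapping an existing DataSource might look like the following (the class and method names here are illustrative, not part of any framework; only the P6DataSource constructor is from P6Spy itself):

```java
import javax.sql.DataSource;

import com.p6spy.engine.spy.P6DataSource;

public final class DataSourceConfig {

    private DataSourceConfig() {
    }

    // Wrap the application's real DataSource so that every connection it
    // produces goes through P6Spy and all SQL statements are logged.
    public static DataSource wrapWithP6Spy(DataSource realDataSource) {
        return new P6DataSource(realDataSource);
    }
}
```

In a Spring or Guice setup, you would apply this wrapping in the bean/module that builds the DataSource, so the rest of the application is unaware of P6Spy.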
If your application obtains connections from DriverManager, simply modify your JDBC connection URL to include p6spy: . For example, if your URL is jdbc:mysql://host/db then just change it to jdbc:p6spy:mysql://host/db . P6Spy implements the JDBC 4.0 API, allowing automatic registration of its JDBC driver with DriverManager. As long as your application obtains connections through DriverManager, you only need to modify your database connection URL to activate P6Spy.
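For illustration, the DriverManager case might look like this sketch (the URL, user, and password are placeholders; the only change from a normal JDBC program is the p6spy: prefix):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class P6SpyDriverManagerExample {

    public static void main(String[] args) throws SQLException {
        // The p6spy: prefix routes the connection through P6Spy's driver,
        // which auto-registers via JDBC 4.0 service discovery and then
        // delegates to the real MySQL driver.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:p6spy:mysql://host/db", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            // Use the connection as usual; every statement is logged by P6Spy.
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```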
Pentaho can be configured to take advantage of the P6Spy logging capability as well.
To demonstrate its use, let's configure P6Spy to log all the SQL queries passing through the SampleData datasource.
1. Download the P6Spy .zip file and unzip it.
2. Copy the p6spy-x.x.x-log4j-nodep.jar and p6spy-x.x.x.jar files into the pentaho-server/tomcat/webapps/pentaho/WEB-INF/lib folder.
3. Edit the pentaho-server/tomcat/webapps/pentaho/WEB-INF/classes/log4j.xml file and append the following configuration at the end of the file, just before the closing </log4j:configuration> tag:
<!-- ========================================================= -->
<!-- Special Log File specifically for p6spy                   -->
<!-- ========================================================= -->
<appender name="SPYLOG" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="../logs/spy.log"/>
  <param name="Append" value="false"/>
  <param name="MaxFileSize" value="500KB"/>
  <param name="MaxBackupIndex" value="1"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
  </layout>
</appender>
<category name="p6spy">
  <priority value="DEBUG"/>
  <appender-ref ref="SPYLOG"/>
</category>
NOTE: If the spy.log file is not created after the Pentaho server starts, update the File param with an absolute path.
4. Create the pentaho-server/tomcat/lib/spy.properties file and insert the following configuration as its content:
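The original does not show the contents of this file. As a hedged starting point, a minimal spy.properties for a P6Spy 3.x distribution might look like the following (property names are from P6Spy 3.x and may differ in older versions; adjust to your release):

# modules to load; the logging module is what produces the SQL log
modulelist=com.p6spy.engine.spy.P6SpyFactory,com.p6spy.engine.logging.P6LogFactory
# route output through SLF4J so the "p6spy" log4j category defined above picks it up
appender=com.p6spy.engine.spy.appender.Slf4JLogger
# format used for the "current time" field of each log entry
dateformat=yyyy-MM-dd HH:mm:ss
# skip noisy categories; keep statement-level logging
excludecategories=info,debug,result,resultset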
5. Edit the pentaho-server/tomcat/webapps/pentaho/META-INF/context.xml file and add the following configuration for the new P6Spy datasource:
<Resource name="jdbc/DSspy"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          maxActive="20"
          minIdle="0"
          maxIdle="5"
          initialSize="0"
          maxWait="10000"
          username="pentaho_admin"
          password="password"
          driverClassName="com.p6spy.engine.spy.P6SpyDriver"
          url="jdbc:p6spy:hsqldb:hsql://localhost:9001/SampleData"
          validationQuery="select count(*) from INFORMATION_SCHEMA.SYSTEM_SEQUENCES"/>
6. Restart the Pentaho server.
7. Log in to the Pentaho User Console (PUC) as Admin and click the Manage Data Sources button.
8. Edit the datasource SampleData/JDBC.
9. Change the Access type from Native (JDBC) to JNDI, set the JNDI Name to DSspy, and test the connection.
10. Test it. Open the CTools Dashboard under the Public/Steel Wheels folder and/or create a new Analysis Report that uses the SteelWheelsSales cube.
11. Verify that all the SQL queries are being logged in the spy.log file.
current time - the current time is obtained through System.currentTimeMillis() and represents the number of milliseconds that have passed since January 1, 1970 00:00:00.000 GMT. (Refer to the J2SE documentation for further details on System.currentTimeMillis().) To change the format, use the dateformat property described in Common Property File Settings.
execution time - the time it takes in milliseconds for a particular method to execute. (This is not the total cost for the SQL statement.) For example, a statement SELECT * FROM MYTABLE WHERE THISCOL = ? might be executed as a prepared statement, in which case the .execute() call will be measured. This is recorded under the statement category. Further, as you call .next() on the ResultSet, each .next() call is recorded in the result category.
category - you can manage your log by including and excluding categories, which is described in Common Property File Settings.
connection id - indicates the connection on which the activity was logged. The connection id is a sequentially generated identifier.
statement SQL string - the SQL string passed to the statement object. If it is a prepared statement, it is the prepared statement as it existed prior to the parameters being set. To see the complete statement, refer to effective SQL string.
effective SQL string - if you are not using a prepared statement, this contains no value. Otherwise, it fills in the values of the prepared statement so you can see the effective SQL statement that is passed to the database. Of course, the database still sees the prepared statement, but this string is a convenient way to see the actual values being sent to the database.
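Putting the fields above together, a single entry in spy.log might look like the following (the values and the exact field order are illustrative; the order depends on the P6Spy version and configured format):

2024-01-15 10:23:45|12|statement|connection 3|select * from CUSTOMERS where ID = ?|select * from CUSTOMERS where ID = 42

Here 12 is the execution time in milliseconds, statement is the category, 3 is the connection id, and the last two fields are the statement SQL string and the effective SQL string with the parameter filled in.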
- For a production environment, evaluate the performance impact before enabling P6Spy.
- SQL queries will not be logged if the data was already loaded into the cache.
- No tests were done for scheduled tasks.
In a multi-tenant environment you might also need the logged-in user in the log file. For that, you can supply your own log message formatter to customize the format and include new data in it. Simply create a class that implements the
com.p6spy.engine.spy.appender.MessageFormattingStrategy interface and place it on the classpath.
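As a sketch, a custom formatter that prepends the current Pentaho user might look like the following. The class and package names are hypothetical; the formatMessage signature shown is the P6Spy 3.x one (older versions have a different parameter list), and PentahoSessionHolder is assumed to be available on the server classpath:

```java
package com.example.p6spy;

import com.p6spy.engine.spy.appender.MessageFormattingStrategy;
import org.pentaho.platform.api.engine.IPentahoSession;
import org.pentaho.platform.engine.core.system.PentahoSessionHolder;

public class UserAwareMessageFormat implements MessageFormattingStrategy {

    @Override
    public String formatMessage(int connectionId, String now, long elapsed,
                                String category, String prepared, String sql,
                                String url) {
        // Resolve the current Pentaho user, if a session is bound to this thread
        IPentahoSession session = PentahoSessionHolder.getSession();
        String user = (session != null) ? session.getName() : "unknown";

        // user|current time|execution time|category|connection id|effective SQL
        return user + "|" + now + "|" + elapsed + "|" + category
                + "|connection " + connectionId + "|" + sql;
    }
}
```

Package the class in a jar, drop it in WEB-INF/lib, and point P6Spy at it in spy.properties.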
Customising the Log Message
1. Copy into the pentaho-server/tomcat/webapps/pentaho/WEB-INF/lib folder the attached P6SpyCustomMessage-1.0.0.jar file.
2. Append the following configuration to the pentaho-server/tomcat/lib/spy.properties file:
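The original does not show this configuration. Assuming P6Spy 3.x, a custom formatter is registered with the logMessageFormat property, pointing at whatever class implementing MessageFormattingStrategy you placed on the classpath (a hypothetical com.example name is shown here):

# fully-qualified name of the class implementing MessageFormattingStrategy
logMessageFormat=com.example.p6spy.UserAwareMessageFormat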
3. Restart the Pentaho Server.
4. Execute the Reports/Dashboards again and check the new log message.