Cannot load JDBC driver class ‘net.sourceforge.jtds.jdbc.Driver’

This post shows some possible causes of the exception above when using Eclipse with multiple installed Java Runtime Environments. For this tutorial we used ‘Spring Tool Suite Version: 3.1.0.RELEASE’ together with Apache Maven 3.0.4 (r1232337; 2012-01-17 09:44:56+0100).

Our Situation

We were working on a big project which involved a lot of data loading and data processing and was designed around Spring Batch from the Spring Framework. On some Windows 7 PCs, starting JUnit tests that initialized a Spring context from within Eclipse (STS) failed during initialization, while running the same JUnit tests from the Maven console worked fine. The exception thrown when starting the JUnit tests within Eclipse read as follows:

java.lang.IllegalStateException: Failed to load ApplicationContext
at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:99)
at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:122)
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109)
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75)
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:312)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:211)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:288)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:284)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:88)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jobRepository': Invocation of init method failed; nested exception is org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class 'net.sourceforge.jtds.jdbc.Driver'
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1482)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:610)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:932)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:120)
at org.springframework.test.context.support.AbstractGenericContextLoader.loadContext(AbstractGenericContextLoader.java:60)
at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.delegateLoading(AbstractDelegatingSmartContextLoader.java:100)
at org.springframework.test.context.support.AbstractDelegatingSmartContextLoader.loadContext(AbstractDelegatingSmartContextLoader.java:248)
at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContextInternal(CacheAwareContextLoaderDelegate.java:64)
at org.springframework.test.context.CacheAwareContextLoaderDelegate.loadContext(CacheAwareContextLoaderDelegate.java:91)
... 25 more
Caused by: org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class 'net.sourceforge.jtds.jdbc.Driver'
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:293)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:320)
at org.springframework.batch.support.DatabaseType.fromMetaData(DatabaseType.java:93)
at org.springframework.batch.core.repository.support.JobRepositoryFactoryBean.afterPropertiesSet(JobRepositoryFactoryBean.java:145)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1541)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1479)
... 40 more
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class 'net.sourceforge.jtds.jdbc.Driver'
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:280)
... 45 more
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class 'net.sourceforge.jtds.jdbc.Driver'
at org.apache.commons.dbcp.BasicDataSource.createConnectionFactory(BasicDataSource.java:1429)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1371)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77)
... 46 more
Caused by: java.lang.UnsupportedClassVersionError: net/sourceforge/jtds/jdbc/Driver : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at org.apache.commons.dbcp.BasicDataSource.createConnectionFactory(BasicDataSource.java:1415)
... 50 more

When looking carefully at the stack trace, you can find the following line:

Caused by: java.lang.UnsupportedClassVersionError: net/sourceforge/jtds/jdbc/Driver : Unsupported major.minor version 51.0

On StackOverflow as well as on Wikipedia we can find the key to this somewhat cryptic message:

J2SE 8 = 52
J2SE 7 = 51
J2SE 6.0 = 50
J2SE 5.0 = 49
JDK 1.4 = 48
JDK 1.3 = 47
JDK 1.2 = 46
JDK 1.1 = 45

In other words: the version of the ‘net.sourceforge.jtds.jdbc.Driver’ JDBC driver we were using was compiled for J2SE 7 (major version 51) and therefore cannot be loaded by any Java runtime older than J2SE 7.
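If you want to verify which class-file version a driver was actually compiled for, you can read the major version straight out of the class-file header (after the 0xCAFEBABE magic number, bytes 6 and 7 hold the major version, big-endian). The following is a minimal sketch of ours, not part of the original post; the class name and the path argument are our own:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ClassVersionCheck {

    // Returns the major class-file version (e.g. 50 = Java 6, 51 = Java 7).
    static int majorVersion(byte[] classBytes) {
        // Class-file layout: u4 magic (0xCAFEBABE), u2 minor_version, u2 major_version.
        if (classBytes.length < 8
                || classBytes[0] != (byte) 0xCA || classBytes[1] != (byte) 0xFE
                || classBytes[2] != (byte) 0xBA || classBytes[3] != (byte) 0xBE) {
            throw new IllegalArgumentException("not a class file");
        }
        return ((classBytes[6] & 0xFF) << 8) | (classBytes[7] & 0xFF);
    }

    public static void main(String[] args) throws IOException {
        // Point this at a .class file extracted from the driver JAR,
        // e.g. net/sourceforge/jtds/jdbc/Driver.class.
        byte[] bytes = Files.readAllBytes(Paths.get(args[0]));
        System.out.println("major version: " + majorVersion(bytes));
    }
}
```

For the jTDS driver in question, this would print 51, i.e. Java 7.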

Analysis

After checking the Java runtime environments configured within Eclipse (see ‘Installed JREs’ in the Eclipse preferences pane), the execution environments (see ‘Execution Environments’ in the Eclipse preferences pane), the Java version used by Maven, and the maven-compiler-plugin settings in our parent POM, we found the following:

              JRE(s)     EE / mapped JRE                                        Maven JDK   Parent POM source/target
Not working   1.6.0_37   JavaSE-1.6 -> jdk1.6.0_37                              1.7.0_21    1.6 / 1.6
Working       1.7.0_21   JavaSE-1.6 -> jdk1.7.0_21, JavaSE-1.7 -> jdk1.7.0_21   1.7.0_21    1.6 / 1.6

The problem was that the JDBC driver requires a JRE compatible with 1.7 (major version 51). On PCs with only a single JRE of version 1.7.0_21 configured, the problem never appeared, since there we have major version 51. On clients with older JREs installed, Eclipse did not run the code in a 1.7 JRE, so initialization failed because the JDBC driver could not be loaded in a 1.6.x JRE. Even after configuring a 1.7.x JRE within Eclipse, things still did not work on the affected machines. This time the cause was not a missing JRE but the settings in the parent POM, which told the compiler to target JRE 1.6. Based on this, the best-matching execution environment was chosen, which again turned out to be 1.6.0_37 rather than 1.7.0_21. And why did it work from Maven? Because Maven was configured to build and run with JDK 1.7.0_21:

C:\users\dropbit>mvn -version
Apache Maven 3.0.4 (r1232337; 2012-01-17 09:44:56+0100)
Maven home: C:\Program Files\SpringSource Tool Suite\apache-maven-3.0.4
Java version: 1.7.0_21, vendor: Oracle Corporation
Java home: C:\Program Files\Java\jdk1.7.0_21\jre
Default locale: de_CH, platform encoding: Cp1252
OS name: "windows 7", version: "6.1", arch: "amd64", family: "windows"
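If you are unsure which JRE your tests actually run under inside Eclipse, a tiny diagnostic helps; this snippet is our own addition, not from the original project, and can be dropped into any test or main method:

```java
public class RuntimeInfo {
    public static void main(String[] args) {
        // These system properties reveal which JRE actually executes the code.
        // Under Eclipse they reflect the launch configuration's JRE,
        // which may differ from the one Maven uses on the command line.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}
```

Comparing this output with `mvn -version` makes a mismatch like the one above immediately visible.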

How to fix the Problem

To make the tests run within Eclipse on the affected Windows clients, we had three possible choices, in descending order of preference:

  • fix the target Java version within the parent POM to be 1.7 instead of 1.6
  • configure the execution environment within Eclipse (see ‘Execution Environments’ within Eclipse preferences pane) to contain the setting: JavaSE-1.6 -> jdk1.7.0_21
  • remove configuration for the JRE 1.6.0_37 and only configure JRE 1.7.0_21

Of course the real mistake here was the parent POM, which had once been configured to compile for 1.6 JREs; that no longer made sense after introducing a new JDBC driver that can only run on 1.7 JREs. So we corrected the invalid POM

        
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.3.2</version>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
            <encoding>UTF-8</encoding>
          </configuration>
        </plugin>
        

to compile for the requested target version 1.7 instead:

        
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.3.2</version>
          <configuration>
            <source>1.7</source>
            <target>1.7</target>
            <encoding>UTF-8</encoding>
          </configuration>
        </plugin>
        
Posted in Eclipse, Software Engineering

Data Security without the Cloud

This post shows a simple solution for backing up your most valuable documents on a USB stick in a hidden and encrypted form, so that the data is not easily recognizable or accessible to unauthorized users. Anyone connecting the stick would just see an ordinary USB stick and have access to that drive, while the main part of the stick stays ‘under the hood’.

This tutorial is based on OS X, but it can easily be transferred to any kind of Linux, or with some additional effort even to a Windows client. If you adapt the tutorial for another platform, please let me know, so we can provide your solution here as well.

For the encryption part we go with TrueCrypt, an open-source solution for transparent, real-time file-, container- or partition-based encryption.

Preparation: Get your Copy of TrueCrypt

Go to the download section of the TrueCrypt website and download the most recent version of TrueCrypt, currently version 7.1a. Then install it.

Step 1: Prepare your USB-Stick

Get a USB stick with enough space for your valuable documents, say 64 GB. We will split it into a smaller visible partition of 16 GB and a bigger invisible partition of 48 GB. Of course you can choose the visible partition to be even smaller. Keep in mind that the visible partition serves to transport ordinary, non-valuable information and can be used like a normal USB stick. When connecting the stick to your Mac, only the visible partition gets mounted. This is nice, because if your stick gets lost, chances are good that the finder doesn’t even realize that there is another partition.

To partition your stick in that manner, run ‘Disk Utility’ (‘Festplatten-Dienstprogramm’ in German) and create two partitions by selecting ‘2 Partitions’ from the partition-layout dropdown:

Then define the name and size of the first, visible partition. Here we selected 16 GB for the first partition and 48 GB for the second one.


For the moment you don’t have to care about the partition properties and file format chosen for the invisible second partition. Just make sure the partition sizes fit your needs. You can set the size of one partition; the other’s size will then be calculated by Disk Utility by subtracting some administrative data blocks from the space left. Choose an appropriate file system for both partitions, e.g. MS-DOS (FAT):

Then press the ‘Apply’ button and your USB stick will be partitioned accordingly. Depending on the size of your stick, this may take several minutes.

Step 2: Create your Hidden Encrypted Device

In this step we use the open-source encryption utility TrueCrypt to turn the second partition into a hidden encrypted partition. To start, connect your previously prepared USB stick and launch TrueCrypt.

  • In the start window choose ‘Create Volume’
  • Then select ‘Create a volume within a partition/drive’


Click ‘Next’. In the following dialog choose ‘Standard TrueCrypt Volume’ and click ‘Next’. In the ‘Volume Location’ dialog click ‘Select Device’ and you should see something like the following:

Choose the bigger partition of the stick (CAUTION: make sure you don’t choose an existing partition of your physical hard drive or another external device!) and click ‘OK’. Click ‘Next’ and accept the warning about creating encrypted partitions instead of encrypted files. Concerning the encryption options, we suggest the following selection:

After clicking ‘Next’ you’ll be prompted for your password. Make sure you note down your password or password hints and put them into your safe, or tell them to someone you trust. Choose a password of at least 32 characters, containing uppercase and lowercase letters as well as digits and special characters. Then click ‘Next’ and make your choice on the next question (files > 4 GB allowed or not). Make your selection and click ‘Next’. In the following dialog choose the file format (e.g. FAT; don’t check the fast-format option), and on the next dialog move your mouse to generate the random data used to format the partition.

You’ll then see a dialog indicating that the device has been successfully created. Click ‘OK’, then ‘Exit’ to finish the process. Congratulations! You now have a hidden, encrypted partition on your USB stick.

Step 3: Mount your Hidden Encrypted Device

When you insert the stick into your laptop, you should now see only one device icon, representing the unencrypted partition of your USB stick. Use this device to store or transport data that is not confidential.

To get access to your newly created partition, start TrueCrypt and click ‘Select Device…’ in the main dialog. Again select the partition of the USB stick you have just finished setting up and click ‘Mount’. You will be prompted for the password. Enter it and click ‘OK’. The main window will then show the mounted partition, while at the same time a new device icon appears on the desktop. This is your hidden encrypted device for storing your confidential data.

After you’ve finished working with your secured device, don’t forget to unmount it by ejecting it in the Finder and/or using the ‘Dismount’ button in TrueCrypt.

Now you have a secured, portable USB stick for your data. Depending on your needs, you might want to use the stick as a portable backup device rather than to make your data available everywhere and work with it directly. It can even replace the cloud, since – as this post is about – you may not want to trust your data to the cloud, encrypted or not. If this is your situation, read on with this post!

Posted in Data Security

Cloud or no Cloud, that’s the question!

This post is about the question of whether or not you should use cloud services to back up your data. Compared to traditional backup solutions using hard drives, a cloud solution brings lots of advantages. On the other hand, once uploaded into the cloud, your data is potentially available to anyone. Before you upload your secrets to the cloud, anyone seeking unauthorized access to them has to break into your apartment, crack the lock, trick your dog, fool your alarm system and get access to your PC. Or, if you’re connected to the web from time to time, they would at least need to make you download a nice little trojan, or connect to one you’ve already installed, to get root access to your machine. The situation is different when you move your valuables to the cloud. From then on, your data is potentially reachable by anyone. For the purposes of this discussion it’s irrelevant whether interested parties first need to hack your cloud password to be able to read your data, or get access granted directly by governmental institutions ‘in the interests of national security’.

So the answer to the question discussed here (cloud or no cloud?) will be different for everyone among us and – if considered well – will depend on the type of information and documents to be stored.

While you probably don’t care if the NSA, intelligence services or industrial-espionage networks scan through all your holiday pictures, you probably don’t want them to scan and intercept your not-yet-published dissertation, your secret documents about a new algorithm upon which you’re planning to build your company, your preparations for an international call for proposals, and so forth. You see where these thoughts are leading:

1st step: Split up your Data into two Sets

The first thing to do is to split up your data into two disjoint sets of data:

The first set of data - let's call it Public Data - contains the data everyone may have access to; it can be synced to a cloud service. I'll soon write a separate post showing possibilities for making even this set of data a bit harder to scan or intercept.

The second set of data - let's call it Private Data - is your valuable data, which you know other persons or organizations, e.g. competitors, might be interested in and would pay a lot to get access to.

Now think of your data and determine what kind of files you would label ‘Private Data’. If you don’t have information of that kind, you’re done and can skip the rest of this post. If you do, read on. The rest of this post provides a concept for dealing with your Private Data.

2nd step: Decide on how to backup your Private Data

First make sure none of your Private Data is synced to a cloud service. If it is, remove it. (That way the public will at least not continue to have the most recent versions of your Private Data.) Now you’re on your own with your data, and you need a separate backup solution for this type of data. You have three options:

  1. Do your backup manually
    This is the simplest solution. Buy an external hard drive or a USB stick and back up your Private Data regularly by dragging the documents onto the drive icon.
  2. Use your own solution
    Write a script which does the work for you and syncs your Private Data to the external hard drive or USB stick. Add some encryption sugar where desired.
  3. Use a commercial solution
    Use the backup solutions available on your operating system, or get another commercial backup software you trust. Make sure you understand exactly how it works and what files actually get backed up.

Solution 1 gets cumbersome with a lot of files and is error-prone as well, so we don’t consider it further. Of the remaining options we follow option 2, since we want a free solution with the best possible security. How this is done you can read in this post.

Posted in Data Security, Enterprises

Match multibyte Characters with Regex in JAVA

When working intensively with regular expressions in JAVA and languages other than English are involved, you are bound to meet unexpected behaviour, at least when coping with it for the first time. This post shows how to match multibyte characters with regex in JAVA.

In German we have the umlauts ‘ä, ö, ü’, which usually don’t cause trouble, since their code points are covered by the code page active in our countries, so we can match them in regular expressions as expected. Everything works fine. However, the situation is different when trying to match the following name:

OLIVIER DE CROŸ

The task is to replace all occurrences of ‘Ÿ’ by ‘Y’. Since the character is not available via keyboard, we can use a Unicode table to find the code point of the ‘Ÿ’ character. In JAVA we can provide Unicode literals using the ‘\\uXXXX’ notation, where XXXX is the 2-byte code of the character in hex. Using the replaceAll method of the String object, which takes a regex as its first argument, we can complete the task as follows:


		String input = "OLIVIER DE CROŸ";
		String output = input.replaceAll("[\\u0178]", "Y");
		System.out.println("in: " + input + ", out: " + output);

This does the job in our situation, and we see the following output:

in: OLIVIER DE CROŸ, out: OLIVIER DE CROY

However, we have to be careful about who interprets the escape sequence. In the version above, the doubled backslash (‘\\u0178’) passes the escape through to the regex engine, which resolves it to the character U+0178 when compiling the pattern. Alternatively, we can let the Java compiler resolve the escape at compile time by writing a single backslash, so that the literal character ‘Ÿ’ is embedded in the pattern itself. Both variants match the precomposed form of ‘Ÿ’. Note that many accented characters also have a decomposed representation (here ‘Y’ followed by the combining diaeresis U+0308), which neither pattern matches directly, so such input should be normalized to its precomposed form first. The single-backslash variant looks as follows:


		String input = "OLIVIER DE CROŸ";
		String output = input.replaceAll("[\u0178]", "Y");
		System.out.println("in: " + input + ", out: " + output);

Using this, you’ll get the same output:

in: OLIVIER DE CROŸ, out: OLIVIER DE CROY
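To also catch the decomposed spelling of ‘Ÿ’ (‘Y’ followed by the combining diaeresis U+0308), one option is to normalize the input to its precomposed (NFC) form before replacing. This sketch goes beyond the original post and uses the JDK’s java.text.Normalizer:

```java
import java.text.Normalizer;

public class NormalizeExample {
    public static void main(String[] args) {
        // Decomposed form: 'Y' (U+0059) followed by combining diaeresis (U+0308).
        String input = "OLIVIER DE CROY\u0308";
        // NFC composes 'Y' + U+0308 into the single code point U+0178 ('Ÿ') ...
        String nfc = Normalizer.normalize(input, Normalizer.Form.NFC);
        // ... which the pattern from above can now match.
        String output = nfc.replaceAll("[\u0178]", "Y");
        System.out.println("in: " + input + ", out: " + output);
    }
}
```

After normalization, both spellings of the name end up as ‘OLIVIER DE CROY’.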
Posted in Regular Expressions, Software Engineering

Sort String Tokens alphabetically in JAVA

This is an example showing a simple way of sorting string tokens alphabetically in JAVA. We’re using the StringUtils helper class provided by the Spring Framework to realize this example. Add the following dependency to your pom.xml to have StringUtils available:

		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-core</artifactId>
			<version>3.0.6.RELEASE</version>
			<scope>compile</scope>
		</dependency>

The following source code demonstrates sorting the string tokens. First we split the string into an array at the blanks using the tokenizeToStringArray method, then we convert it back to a String using the arrayToDelimitedString method.

import java.util.Arrays;
import org.springframework.util.StringUtils;

public class TestCase {

	public static void main(String[] args) {
		String inputString = "Zenith Peter Alberta Sandra Mikka Thomas Cassandra Mike Leon";
		String[] tokenizeToStringArray = StringUtils.tokenizeToStringArray(inputString, " ");
		Arrays.sort(tokenizeToStringArray);
		String outputString = StringUtils.arrayToDelimitedString(tokenizeToStringArray, " ").trim();
		System.out.println(outputString);
	}

}

Running this little JAVA program will produce the following output:

Alberta Cassandra Leon Mike Mikka Peter Sandra Thomas Zenith
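If you’d rather avoid the Spring dependency, the same result can be achieved with the plain JDK; this variant is a sketch of ours, not from the original post:

```java
import java.util.Arrays;

public class TestCasePlainJdk {

    public static void main(String[] args) {
        String inputString = "Zenith Peter Alberta Sandra Mikka Thomas Cassandra Mike Leon";
        // Split on runs of whitespace, sort alphabetically, re-join with single blanks.
        String[] tokens = inputString.trim().split("\\s+");
        Arrays.sort(tokens);
        System.out.println(String.join(" ", tokens));
        // Prints: Alberta Cassandra Leon Mike Mikka Peter Sandra Thomas Zenith
    }
}
```

String.join requires Java 8 or later; on older JDKs, a small loop over the sorted array does the same job.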
Posted in Software Engineering, Spring Framework

Query to select top-n Records from each Group (TSQL)

Often I come across a task where I need to select the top-n records of each group from a given set of data, where each subgroup is defined by having a specific attribute in common. To demonstrate how this is done, we’ll walk through the following steps. If you’d like to check the outcome on your own and have a working example at hand, we provide some sample data to write the queries against:

Preparation: Load Test-Data

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING OFF
GO
CREATE TABLE [dbo].[any_table](
	[event_date] [date] NULL,
	[some_id] [char](7) NOT NULL,
	[create_timestamp] [datetime] NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x09380B00 AS Date), N'0340001', CAST(0x0000A2C700D2145F AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x09380B00 AS Date), N'0340001', CAST(0x0000A2C700D2147F AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x09380B00 AS Date), N'0340001', CAST(0x0000A2C700D214ED AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x09380B00 AS Date), N'0340001', CAST(0x0000A2C700D21552 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x0E380B00 AS Date), N'0340001', CAST(0x0000A2BA01138ED4 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x0E380B00 AS Date), N'0340001', CAST(0x0000A2BA01138EEA AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x0E380B00 AS Date), N'0340008', CAST(0x0000A2BA01138EFD AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x0E380B00 AS Date), N'0340011', CAST(0x0000A2BA0115BFFD AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x15380B00 AS Date), N'0340001', CAST(0x0000A2C700D21573 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x15380B00 AS Date), N'0340001', CAST(0x0000A2C700D215B5 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x1C380B00 AS Date), N'0340001', CAST(0x0000A2C700D215E6 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x1C380B00 AS Date), N'0340001', CAST(0x0000A2C700D2162B AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x1C380B00 AS Date), N'0340001', CAST(0x0000A2C700D21631 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x1E380B00 AS Date), N'0340001', CAST(0x0000A2C700D2168F AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x1E380B00 AS Date), N'0340001', CAST(0x0000A2C700D216C2 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x22380B00 AS Date), N'0340001', CAST(0x0000A2C700D216E0 AS DateTime))
INSERT [dbo].[any_table] ([event_date], [some_id], [create_timestamp]) VALUES (CAST(0x22380B00 AS Date), N'0340001', CAST(0x0000A2C700D2171E AS DateTime))
GO

The following screenshot shows the test-dataset provided by the script above.

Sample Data provided by the script

The following screenshot shows the result we’d like to have: For each event-date we want to have the top two most recent records and see their ‘some_id’.

Desired Result Set: Show top 2 most recent Records per event_date

Next we build up our final query step by step:

1) Select the required data
In this step you decide what data you want to select. Since in our example we want to see all available columns in the end, we select all attributes the table provides:

SELECT event_date,
       some_id,
       create_timestamp
FROM   any_table

2) Sort the data as required
Since for a given event_date we’re interested in the most recently created records, we sort by event_date desc and create_timestamp desc:

SELECT event_date,
       some_id,
       create_timestamp
FROM   any_table
ORDER  BY event_date DESC,
          create_timestamp DESC

Which gives us the following result:

Correctly sorted Sample Data

3) Apply your ‘magic rownumber’
Now we add a rownumber which assigns ascending numbers to the records based on the sort order defined above, starting at 1 within each event_date. For this purpose just add Row_number() and bind it to a partition over the same sort order defined before:

SELECT event_date,
       some_id,
       create_timestamp,
       Row_number()
         OVER (
           partition BY event_date
           ORDER BY event_date DESC, create_timestamp DESC ) row_id
FROM   any_table

As you can see, the partition by clause contains one attribute less than the order by clause. This is exactly the point of the numbering: within each partition of equal event_date values, the timestamp sorted in descending order defines the row number inside that segment. With this query you’ll get the following result:


Your sorted set with the “Magic Rownumber”
4) Now select the desired records
Now that you have your row_id, select the records you’re interested in. Since we only want the top 2 records for each event_date, we limit row_id to <= 2 and use the query from the last step as the inner query, with the select and sort from the 2nd step as the outer query:

SELECT rowed_set.event_date,
       rowed_set.some_id,
       rowed_set.create_timestamp
FROM   (SELECT event_date,
               some_id,
               create_timestamp,
               Row_number()
                 OVER (
                    PARTITION BY event_date
                   ORDER BY event_date DESC, create_timestamp DESC ) row_id
        FROM   any_table) rowed_set
WHERE  rowed_set.row_id <= 2
ORDER  BY rowed_set.event_date DESC,
          rowed_set.create_timestamp DESC

Or, if you prefer solutions using common table expressions (CTEs), the same goal can be reached with the following script, which is a bit easier to read:

 ;WITH colrecs
     AS (SELECT event_date,
                some_id,
                create_timestamp,
                Row_number()
                  OVER (
                     PARTITION BY event_date
                    ORDER BY event_date DESC, create_timestamp DESC ) row_id
         FROM   any_table)

SELECT event_date,
       some_id,
       create_timestamp
FROM   colrecs
WHERE  row_id <= 2
ORDER  BY event_date DESC,
          create_timestamp DESC
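If you want to sanity-check the partitioned numbering outside the database, the same “top 2 per group” logic can be sketched in plain Java (a hypothetical sketch; the Row type and its field names are ours, not part of the post): group rows by event_date, sort each group by create_timestamp descending, and keep the first two rows per group, which mirrors WHERE row_id <= 2.

```java
import java.util.Comparator;
import java.util.List;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class TopTwoPerGroup {

    // Hypothetical stand-in for one row of any_table.
    record Row(String eventDate, String someId, long createTimestamp) {}

    // Mirrors ROW_NUMBER() OVER (PARTITION BY event_date
    //                            ORDER BY create_timestamp DESC)
    // combined with WHERE row_id <= 2.
    static List<Row> topTwoPerEventDate(List<Row> rows) {
        return rows.stream()
                // partition: one bucket per event_date, newest date first
                .collect(Collectors.groupingBy(Row::eventDate, TreeMap::new, Collectors.toList()))
                .descendingMap().values().stream()
                // within each bucket: sort by create_timestamp DESC, keep the top 2
                .flatMap(group -> group.stream()
                        .sorted(Comparator.comparingLong(Row::createTimestamp).reversed())
                        .limit(2))
                .collect(Collectors.toList());
    }
}
```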

Congrats. You’re done.

Posted in Databases, TSQL (MSSQL)

Apache Karaf 3.0.0 bundle:watch

How do you develop with Apache Karaf 3.0.0 and Maven without wasting a lot of time? Apache Karaf provides the command

bundle:watch

With this command you can define Maven URLs which are observed by Karaf after you have installed a bundle.

bundle:install -s mvn:ch.dropbit.test/cxf-rest-test/0.0.1-SNAPSHOT
bundle:watch mvn:ch.dropbit.test/cxf-rest-test/0.0.1-SNAPSHOT

Afterwards, each Maven build (mvn clean install) triggers a reload of the cxf-rest-test bundle.
If you want to remove an observation, use something like:

bundle:watch --remove mvn:ch.dropbit.test/cxf-rest-test/0.0.1-SNAPSHOT

For more information see bundle:watch

Posted in Software Engineering

Apache Karaf 3.0.0 CXF 2.7.10 Rest Example

In this tutorial we show you how to set up Apache Karaf 3.0.0 and deploy a very simple REST service with Apache CXF in it. If you have any questions, don’t hesitate to contact us here.

1. Download apache-karaf-3.0.0.zip from the Apache Karaf download page
2. Unzip apache-karaf-3.0.0.zip
3. Open a console and go to the directory %karaf_home%\bin
4. Start Karaf by entering karaf and pressing the enter key
karaf
5. Add the CXF feature repository. After this step your Karaf is able to install the CXF feature.

karaf@root()> feature:repo-add cxf 2.7.10
Adding feature url mvn:org.apache.cxf.karaf/apache-cxf/2.7.10/xml/features

6. Check your feature repositories for the entry cxf-2.7.10

karaf@root()> feature:repo-list
Repository              | URL
-------------------------------------------------------------------------------------
standard-3.0.0          | mvn:org.apache.karaf.features/standard/3.0.0/xml/features
org.ops4j.pax.web-3.0.5 | mvn:org.ops4j.pax.web/pax-web-features/3.0.5/xml/features
cxf-2.7.10              | mvn:org.apache.cxf.karaf/apache-cxf/2.7.10/xml/features
enterprise-3.0.0        | mvn:org.apache.karaf.features/enterprise/3.0.0/xml/features
spring-3.0.0            | mvn:org.apache.karaf.features/spring/3.0.0/xml/features

7. If you can see cxf-2.7.10 in the list, you can install the CXF feature:

feature:install cxf

8. Check the cxf installation

feature:list | grep cxf

(Screenshot: feature:list output showing the installed cxf features)
9. Let’s have a look at which OSGi bundles are installed so far with the command bundle:list. Because we installed the CXF feature, we should now see the bundle “Apache CXF Compatibility Bundle Jar”.

karaf@root()> bundle:list
START LEVEL 100 , List Threshold: 50
 ID | State  | Lvl | Version | Name
------------------------------------------------------------------
175 | Active |  50 | 2.7.10  | Apache CXF Compatibility Bundle Jar

10. Let’s check whether any CXF endpoints are installed in Karaf so far with the command cxf:list-endpoints.

karaf@root()> cxf:list-endpoints
Name        State      Address      BusID

As you can see, no CXF endpoint is defined yet. In the tutorial attachment cxf-rest-test you can find a Maven project which creates an OSGi bundle with a CXF endpoint. After you have installed the “cxf-rest-test” bundle in your Karaf, the endpoint can be reached at

http://localhost:8181/cxf/dropbit/rest/hello/Tom

Notice that Tom is just a REST path parameter which will be returned in the HTTP response; you can change it to your name if you like. Before we deploy the bundle to Karaf, have a look at the contained source files.

11. See the pom.xml below and notice the dependency scope provided: because the CXF feature is installed in Karaf itself, we don’t need to package the CXF dependencies into our bundle.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 <modelVersion>4.0.0</modelVersion>
 <groupId>ch.dropbit.test</groupId>
 <artifactId>cxf-rest-test</artifactId>
 <version>0.0.1-SNAPSHOT</version>
 <packaging>bundle</packaging>

 <properties>
  <cxf.version>2.7.10</cxf.version>
 </properties>

 <dependencies>
  <dependency>
   <groupId>org.osgi</groupId>
   <artifactId>org.osgi.core</artifactId>
   <version>4.3.1</version>
   <scope>provided</scope>
  </dependency>
  <dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-frontend-jaxrs</artifactId>
   <version>${cxf.version}</version>
   <scope>provided</scope>
  </dependency>
  <dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-frontend-jaxws</artifactId>
   <version>${cxf.version}</version>
   <scope>provided</scope>
  </dependency>
  <dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-transports-http</artifactId>
   <version>${cxf.version}</version>
   <scope>provided</scope>
  </dependency>
  <dependency>
   <groupId>org.apache.cxf</groupId>
   <artifactId>cxf-rt-transports-http-jetty</artifactId>
   <version>${cxf.version}</version>
   <scope>provided</scope>
  </dependency>
 </dependencies>

 <build>
  <plugins>
   <plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>2.3.7</version>
    <extensions>true</extensions>
    <configuration>
     <instructions>
      <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
      <Bundle-Version>${project.version}</Bundle-Version>
      <Bundle-Activator>ch.dropbit.test.RestActivator</Bundle-Activator>
      <Import-Package>*</Import-Package>
     </instructions>
    </configuration>
  </plugin>
 </plugins>
</build>
</project>

12. Apache Karaf uses Apache Aries as its OSGi Blueprint implementation. The blueprint context located in src\main\resources\OSGI-INF\blueprint\rest.xml simply defines a CXF REST endpoint.

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
xmlns:jaxws="http://cxf.apache.org/blueprint/jaxws"
xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs"
xmlns:cxf="http://cxf.apache.org/blueprint/core"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://cxf.apache.org/blueprint/jaxws http://cxf.apache.org/schemas/blueprint/jaxws.xsd
http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd
http://cxf.apache.org/blueprint/core http://cxf.apache.org/schemas/blueprint/core.xsd">
 <jaxrs:server address="/dropbit" id="someRestService">
  <jaxrs:serviceBeans>
   <ref component-id="restServiceImpl"/>
  </jaxrs:serviceBeans>
 </jaxrs:server>

 <bean id="restServiceImpl" class="ch.dropbit.test.RestServiceImpl" />

</blueprint>

13. The file RestService.java contains the JAX-RS annotations which define our simple REST service.

package ch.dropbit.test;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("rest")
public interface RestService {

    @GET
    @Path("hello/{name}")
    public String handleGet(@PathParam("name") String name);

}

14. The file RestServiceImpl.java contains the REST service implementation.

package ch.dropbit.test;

public class RestServiceImpl implements RestService {

	public String handleGet(String name) {
		return String.format("Hi %s, Karaf and CXF is cool.", name);
	}
}
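The greeting is plain String.format output; as a quick sanity check you can reproduce the exact response string in a standalone snippet (no Karaf required; the class name ResponseDemo is ours, not part of the tutorial project):

```java
public class ResponseDemo {

    // Same formatting logic as RestServiceImpl.handleGet.
    static String handleGet(String name) {
        return String.format("Hi %s, Karaf and CXF is cool.", name);
    }

    public static void main(String[] args) {
        System.out.println(handleGet("Tom")); // prints: Hi Tom, Karaf and CXF is cool.
    }
}
```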

15. An OSGi bundle can provide a BundleActivator; in our example the file is called RestActivator.java. The RestActivator is referenced by the pom.xml; in our example it does nothing but simple prints. More important is the blueprint context (rest.xml), which is loaded automatically by Karaf.

package ch.dropbit.test;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class RestActivator implements BundleActivator {

    public void start(BundleContext context) {
        System.out.println("Starting the bundle");
    }

    public void stop(BundleContext context) {
        System.out.println("Stopping the bundle");
    }
}

16. How do you install the cxf-rest-test bundle in your Karaf? This is simple: Karaf is able to read Maven URLs. Just type:

mvn clean install

on the console in your cxf-rest-test project (where the pom.xml is located), then change to the Karaf console and type

karaf@root()> bundle:install -s mvn:ch.dropbit.test/cxf-rest-test/0.0.1-SNAPSHOT

17. Check the newly deployed cxf-rest-test bundle by entering

karaf@root()> bundle:list

Now you should see the cxf-rest-test bundle in the list.
18. Check the CXF endpoint contained in the cxf-rest-test bundle

karaf@root()> cxf:list-endpoints

(Screenshot: cxf:list-endpoints output showing the new endpoint)
19. If everything works, you can access

http://localhost:8181/cxf/dropbit/rest/hello/Hugo
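If you prefer to call the endpoint from code instead of a browser, a minimal Java client sketch might look as follows (hypothetical class and method names; the fetch call only works while Karaf with the cxf-rest-test bundle is running on localhost:8181):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClientSketch {

    static final String BASE = "http://localhost:8181/cxf/dropbit/rest/hello/";

    // Builds the endpoint URL for a given name parameter.
    static String helloUrl(String name) {
        return BASE + name;
    }

    // Performs a plain GET against the endpoint; requires a running Karaf
    // instance with the cxf-rest-test bundle installed.
    static String fetch(String urlString) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(urlString).openConnection();
        con.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            return body.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        String url = helloUrl("Hugo");
        System.out.println(url);
        // With Karaf running, uncomment to print the greeting:
        // System.out.println(fetch(url));
    }
}
```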

20. During development you often have to reinstall a bundle in Karaf to see your recent changes. This is possible with:

karaf@root()> bundle:uninstall cxf-rest-test
karaf@root()> bundle:install -s mvn:ch.dropbit.test/cxf-rest-test/0.0.1-SNAPSHOT

Because that is cumbersome, Karaf provides the command bundle:watch.

Posted in Software Engineering