Let's start with an example. As you may know, the Java I/O system uses checked exceptions, with java.io.IOException being the great-grandfather of them all. I/O is inherently error-prone, so all kinds of failure conditions can arise. However, there are cases when I/O exceptions are never expected to happen. One of these cases concerns static classpath resources: bundled icons, properties files, translations, and so on. The important thing here is that a static resource is an inseparable part of the application. It is loaded from the same location from which your application classes are loaded. If a classpath resource is missing or corrupted, then the application itself is corrupted and may not run correctly. The effect of a missing resource is the same as having one of your jar or class files missing or corrupted.

Now, we know that an attempt to initialize a class that is not on the classpath results in a NoClassDefFoundError. Errors in Java are unchecked exceptions, and this was a good decision. Why? Because if an application class is missing from the classpath, then the application has not been properly built or deployed, and there is nothing the application can do to reliably recover from such a condition. Consequently, you never write unit or integration tests for those kinds of conditions. The same reasoning applies to bundled static classpath resources. However, when loading static resources you have to do the I/O manually and handle all the checked exceptions in your application code. And this is where your code's testability suffers. A code sample is worth a thousand words, so let's dive into the code and demonstrate.
Suppose you have a classpath resource example/AppProperties.properties with the following contents:
greeting=Hello, World!
A class example/AppProperties.java that loads the resource from the class path:
package example;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppProperties {

    public String getGreeting() {
        Properties props = new Properties();
        try {
            InputStream inStream =
                    AppProperties.class.getResourceAsStream("AppProperties.properties");
            props.load(inStream);
            inStream.close();
        } catch (IOException e) {
            // What to do here?
            throw new RuntimeException("This should never have happened!", e);
        }
        return props.getProperty("greeting");
    }
}
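As an aside, getResourceAsStream returns null when the resource is missing, so the version above would surface that case as a NullPointerException from props.load rather than an IOException. A slightly more defensive variant (a sketch of mine, assuming Java 7+ for try-with-resources; the class name and message are illustrative) makes the missing-resource case explicit:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class DefensiveAppProperties {

    public String getGreeting() {
        Properties props = new Properties();
        // try-with-resources closes the stream even when load() throws
        try (InputStream inStream =
                DefensiveAppProperties.class.getResourceAsStream("AppProperties.properties")) {
            if (inStream == null) {
                // A missing resource means the application is mis-assembled,
                // the same category of failure as a missing class file
                throw new IllegalStateException(
                        "AppProperties.properties is not on the classpath");
            }
            props.load(inStream);
        } catch (IOException e) {
            throw new RuntimeException("This should never have happened!", e);
        }
        return props.getProperty("greeting");
    }
}
```

Note that this does not change the testability problem at all: the catch block is exactly as unreachable as before.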
And a test case for this class example/AppTest.java:
package example;

import org.junit.Assert;
import org.junit.Test;

public class AppTest {

    @Test
    public void testAppProperties() {
        Assert.assertEquals("Hello, World!",
                new AppProperties().getGreeting());
    }
}
In my opinion, a simple class like AppProperties should have 100% line coverage by tests. Let's see what we get when running with a code coverage tool (I used EclEmma):
[Screenshot: EclEmma coverage report for AppProperties]
What we see here is that the exception did not happen, so the catch block was never executed, leaving the line coverage incomplete. Moreover, there is no easy way to mock out a condition that would make it happen, because the resource is always there, and the build system ensures it is there along with the rest of the classes.
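For completeness: the catch block can be made reachable, but only by restructuring the class. One common technique (a sketch of my own, not part of the example above; names are illustrative) is to extract the parsing into a method that accepts an InputStream, so a test can feed it a stream that always throws:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropertiesLoader {

    // Package-visible seam: tests can now substitute the I/O source
    static Properties load(InputStream inStream) {
        Properties props = new Properties();
        try {
            props.load(inStream);
        } catch (IOException e) {
            throw new RuntimeException("This should never have happened!", e);
        }
        return props;
    }
}
```

A test can then pass an InputStream whose read() always throws IOException and assert that the RuntimeException comes out. This covers the catch block, but at the price of adding a seam solely for a condition that, as argued above, cannot occur in a properly assembled application.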
And no, I do not want to propagate the IOException by declaring it in the throws clause of the method, because I do not want to expose clients to the implementation details (the backing store could become SQL in the future). Besides, even if I did, the client code would have to handle the exception, and then the client code would not be covered. So I wrapped the exception in a RuntimeException, which is unchecked. I could have used a different unchecked exception, but chose not to, to keep the example simple.
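Incidentally, if you are on Java 8 or later, the standard library acknowledges exactly this pattern: java.io.UncheckedIOException is a dedicated unchecked wrapper for IOException. A sketch of the same catch block using it (the class name here is mine):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Properties;

public class UncheckedPropertiesLoader {

    static Properties load(InputStream inStream) {
        Properties props = new Properties();
        try {
            props.load(inStream);
        } catch (IOException e) {
            // Java 8+: keeps the IOException as the cause, with an unchecked signature
            throw new UncheckedIOException("This should never have happened!", e);
        }
        return props;
    }
}
```

Callers that do care can still catch UncheckedIOException and call getCause() to recover the original IOException, so no information is lost by the wrapping.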
Back to the general idea. Sometimes we face a situation where we are forced to handle exceptions that will "never" happen. That is, "never" in the sense that they can only happen due to improper application assembly. They are not expected, so there is no point in writing recovery code or test cases for those conditions. However, checked exceptions force us to write those try/catch blocks anyway, and those blocks are not testable.
This is not about trying to reach 100% coverage; that question deserves its own discussion. As noted in a very insightful article, you should only use code coverage as a "clue" to places in the code that may contain bugs. Some of these clues are false alarms, i.e. a piece of code is not tested but does not contain bugs. The problem with checked exceptions is that they produce many more of these false alarms. Suppose you run a code coverage tool once a week and then go through the results in search of bug-prone areas. Because try/catch blocks like the one above will never be covered, you will be forced to return to that piece of code every week, review it, say "Oh, it's just that condition that will never happen," and move on. Of course, each such check only takes a few seconds, depending on code complexity, but if your code contains hundreds of places like this, you may waste hours of your time, every week.
I am thinking about writing a follow-up with a conclusion, in which I want to take the points others have made for and against checked exceptions and weigh them on some sort of imaginary scale. I also have a middle-ground solution to the problem. But that is for next time.
Thank you for reading. Your comments are welcome.