Monday, November 14, 2011

Passwords

I remember, when I first started this blog, I said to myself that I wanted to make at least one post a week. When that started slipping, I promised myself to stick to at least once per month. I was shocked the other day when I noticed that I hadn't posted anything since July! I've been slacking off!

Well, not really. I just hit one of those stretches where I have been flat-out swamped. In that time, though, I have been experimenting with some pretty cool stuff. One neat combination of tools that I have found particularly useful is KeePass (KeePassX for Linux users) and Dropbox.

KeePass, if you are unfamiliar, is a password management solution. It stores usernames and passwords for you, so you don't have to remember them all. The password database is itself password protected, so that you only have to remember a single password instead of 37 million different passwords.

If you have used KeePass before, you are probably familiar with its biggest headache. When you go to log in to something that requires a password, you have to open up KeePass, find the correct entry, right-click, copy the username, go back to your form, paste the username, repeat for the password, then go bang your head against the wall in boredom - you know, the usual.

One of the best features of KeePass, though, is also one that very few people seem to be aware of - the global auto-type functionality. As long as KeePass is running, you just hit a special key combo and KeePass automatically types the correct username and password into the fields for you. It does this by matching the title of the currently open window against the title field inside of KeePass. If multiple entries match, it will give you a popup window allowing you to choose the correct one.

With this extra functionality in hand (it is amazing what reading through the manual of your tools will lead to!), KeePass transformed for me from a semi-useful tool that I could sometimes use, to an indispensable service that I use all the time. I turned off the password remembering features of all my browsers, and stored everything in KeePass instead. Access to any site is now only three key presses away, and I never have to worry about remembering the right passwords for things. Also, if my computer were to be stolen, the thief wouldn't have immediate access to everything. So long as I use a sufficiently difficult password, all of the rest of my identity is safe.

Aha, I can hear the naysayers among you even now. What if you are on a different computer? How do you keep this password database in sync? Well, my friend, that is where Dropbox comes into play.

Dropbox is an application that keeps files synchronized across multiple systems. You can get a free account that gives you up to 2 GB of storage. So I just put my database in my Dropbox folder, installed Dropbox on both my work and home computers, and now I have access to my database in both places. You can even, if you choose, put the database file in the Public Dropbox directory that they give you, which automatically gives the file a public URL that you can access from any device with an internet connection. That gives you access to your database from literally anywhere.

What sealed the deal for me was finding out that all of this is available on my iPhone as well. I use the Dropbox app and the MiniKeePass app, and I get the same access to my passwords from my phone as I do from a computer.

Anyway, I understand that this post feels more like a sales post than an informative post, but I thought it was a good way to get my feet wet again. It also helps just a little to spread the word about an app combo that has become indispensable to me in my everyday life.

Monday, July 11, 2011

Mocking LDAP Servers for JUnit

As I have gone over before, I am a fan of unit testing. The list of benefits is quite extensive. However, one of the chief hurdles of unit testing is "I don't know how to test that...". I ran into just such a problem the other day. I have a project that utilizes an LDAP server for its back-end datastore. While support for databases in unit tests is a fairly well fleshed-out field, support for LDAP servers is not so much.

Luckily, though, the good people over at UnboundID have a good solution to help us out. Their LDAP SDK for Java has been a very handy tool I have used for quite a while for handling all of my LDAP connections. Recently, with their 2.1.0 release, they unveiled a new feature specifically for this issue - the in-memory directory server. However, attempting to implement this in-memory server was a little bit more challenging than the documentation would make it seem. There are just a few gotchas to be aware of.

The documentation does contain an example that shows the java code you need to get going on this. The part that is tricky, though, is this line:
server.initializeFromLDIF(true, "/tmp/test.ldif");
There are a couple of problems here. First off, it is, well, wrong. The actual method name is not initializeFromLDIF, but rather importFromLDIF. So it should look like:
server.importFromLDIF(true, "/tmp/test.ldif");
The second, and slightly more significant problem, is that there is no documentation on exactly what needs to go into test.ldif. You are left to figure that one out on your own.

The LDIF file that you import MUST contain the definition for whatever value you specify as the base DN in the InMemoryDirectoryServerConfig constructor. It must not contain definitions for any parents of the base DN (which makes sense, it being the BASE DN), and it cannot define further children without also defining the base DN itself.

So, to go along with the example, here is a sample LDIF you can use to get you started:
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
That is the minimum required definition of the base DN. Note that the following LDIF does NOT work:
dn: dc=com
objectClass: top
objectClass: domain
dc: com

dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

If you want to specify any further branches, that may be done in the same LDIF file, so long as you still include the top portion as above. For example, if you have a People and a Groups branch, your initial LDIF may look something like this:
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example

dn: ou=People,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: People

dn: ou=Groups,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Groups

So, that covers the simple example test case, but I'm guessing that is not enough for most people. More than likely, you already have a real LDAP server out there, and you want to mimic that server in your in-memory instance, so you can test in an environment that matches your production environment.

That does not look easy right off the bat: you need to replicate your server's schema into the in-memory instance, replicate all of the branches that you use, and pull in some real data to test with.

As it turns out, replicating your existing server is not too terribly complicated. If you happen to have an LDIF file that defines your server's schema on hand, you can include it in your project, and the following code should load it up:
// schema.ldif is bundled on the test classpath; IOUtils comes from Apache Commons IO
InputStream schemaLdif = this.getClass().getClassLoader().getResourceAsStream("schema.ldif");
Entry schemaEntry = new Entry(IOUtils.toString(schemaLdif, "UTF-8"));
Schema newSchema = new Schema(schemaEntry);
config.setSchema(newSchema);
The above lines would be inserted into the sample code from the UnboundID website, before the call to the InMemoryDirectoryServer constructor. Note that I have not actually performed the above method myself, as I do not have my server's schema LDIF handy.

For me, and many others out there, you may not have access to the schema LDIF file of your server. Luckily, there is an easier way. First you must actually make a connection to your server, so that you have an LDAPConnection object pointing to it. Then you use the following code snippet:
LDAPConnection connection = //however you get your connection object
Schema newSchema = Schema.getSchema(connection);
config.setSchema(newSchema);
The advantages of this method are that you always have the most up-to-date server schema, and you don't have to worry about tracking down and storing a hard copy of your server's current schema. The downsides are the dependence on your server being up and available, and that it is a little slow - in my testing it took about 6-7 seconds to retrieve the schema.

Storing the schema you retrieve from the server in a static variable can help with the performance bite, as it would only have that 6-7 second delay for your first test. However, there is nothing you can do about the dependence on the external system when utilizing this method, so that is a personal choice you have to make.
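As a sketch of that caching idea (the class and method names here are mine, not part of the SDK):

import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.schema.Schema;

public abstract class LdapTestBase {
    // Fetch the live server's schema once and reuse it for every test in the run
    private static Schema cachedSchema;

    protected static synchronized Schema liveSchema(LDAPConnection connection) throws LDAPException {
        if (cachedSchema == null) {
            cachedSchema = Schema.getSchema(connection);
        }
        return cachedSchema;
    }
}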

Once you have the schema loaded, you still need to import the above LDIF file for your base DN, as well as any test data you might have. All of that can be accomplished with the above importFromLDIF call. Since you should be matching your server's schema exactly, you should be able to just export records directly from your server into an LDIF file, and import them directly into your in memory server.
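Putting the pieces together, a test setup might look something like this sketch (the class and file names are my own, and I am assuming the 2.1.x API as described in the UnboundID documentation):

import com.unboundid.ldap.listener.InMemoryDirectoryServer;
import com.unboundid.ldap.listener.InMemoryDirectoryServerConfig;
import com.unboundid.ldap.sdk.schema.Schema;

public class InMemoryServerFactory {
    // Builds and starts an in-memory server seeded from an LDIF export of the real server
    public static InMemoryDirectoryServer start(Schema schema, String ldifPath) throws Exception {
        InMemoryDirectoryServerConfig config = new InMemoryDirectoryServerConfig("dc=example,dc=com");
        config.setSchema(schema); // schema loaded from a file or a live connection, as above

        InMemoryDirectoryServer server = new InMemoryDirectoryServer(config);
        server.importFromLDIF(true, ldifPath); // the LDIF must define dc=example,dc=com itself
        server.startListening();
        return server; // tests can call server.getConnection() to talk to it
    }
}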

Once you do that, you should be good to go, ready to execute your tests against a clean, safe, local environment.

Friday, May 6, 2011

Extending ParametersInterceptor to get Request Header data

Sometimes, particularly when you work with OpenAM, you need to be pretty friendly with the request headers. I am sure a lot of other things use them as well, but OpenAM is where I have seen it the most - once you log in, OpenAM passes user attributes back in the request headers. This is very useful; however, it does present one annoyance when your web application makes use of Struts: you need to extract those parameters from the request headers yourself.

This is not a big deal, just a little bit of an annoyance. Basically, you need to follow the steps here for making your action ServletRequestAware. This instructs Struts to inject the HttpServletRequest object into your action. Once you do that, you can simply call request.getHeader("headerName") to get the header value you are looking for.
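In code, that approach looks roughly like this (a sketch - the action name and the "uid" header are just examples, and it assumes the standard servletConfig interceptor is in your stack):

import javax.servlet.http.HttpServletRequest;
import org.apache.struts2.interceptor.ServletRequestAware;
import com.opensymphony.xwork2.ActionSupport;

public class ProfileAction extends ActionSupport implements ServletRequestAware {
    private HttpServletRequest request;

    // Struts injects the raw request object for us
    public void setServletRequest(HttpServletRequest request) {
        this.request = request;
    }

    public String execute() {
        // Pull the attribute that OpenAM placed in the request header
        String uid = request.getHeader("uid");
        return SUCCESS;
    }
}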

That is nice and all, but being the inquisitive and annoying personality I am, I wanted something cleaner. I don't like having to deal with the HttpServletRequest object - that is one of the benefits of Struts, hiding that stuff. So I set out to make an interceptor to do it for me. As it turns out, it was quite a bit easier than I would have expected.

First off, let's define the goal. I wanted an interceptor that would read parameters from the request headers and inject them into my action class. So, for example, if I want the user's uid attribute, I just want to have the following in my action class:
private String uid;

public void setUid(String uid)
{
    this.uid = uid;
}

public String getUid()
{
    return uid;
}

Since this functionality is very similar to what the params interceptor does for me already, I looked there first. The Struts params interceptor points to the class ParametersInterceptor, included in the Struts 2 library. This is a fairly beefy bit of code, but one thing jumped out in particular:
/**
 * Gets the parameter map to apply from wherever appropriate
 *
 * @param ac The action context
 * @return The parameter map to apply
 */
protected Map<String, Object> retrieveParameters(ActionContext ac) {
    return ac.getParameters();
}

As it turns out, the Struts authors have been very kind to us here, as the ParametersInterceptor is built to be quite easily overridable. So, to build our interceptor, the code can be quite simple, really. All you need is this:
public class RequestHeaderInterceptor extends ParametersInterceptor implements StrutsStatics {
    private static final long serialVersionUID = 1L;
 
    /**
     * Gets the parameter map to apply from wherever appropriate
     *
     * @param ac The action context
     * @return The parameter map to apply
     */
    @Override
    protected Map<String, Object> retrieveParameters(ActionContext ac) {
        Map<String, Object> params = new HashMap<String, Object>();
  
        HttpServletRequest request = (HttpServletRequest) ac.get(HTTP_REQUEST);
        Enumeration<String> names = request.getHeaderNames();
        while ( names.hasMoreElements() )
        {
            String name = (String)names.nextElement();
            String val = request.getHeader(name);
            params.put(name, val);
        }
  
        return params;
    }
}

One more minor annoyance is that the ParametersInterceptor, and therefore this new RequestHeaderInterceptor, needs to sit in the middle of the interceptor stack. That means that you must forgo the use of the defaultStack and roll your own instead. (The "requestHeader" name referenced below is whatever name you give your RequestHeaderInterceptor when you declare it with an <interceptor> tag in your package's <interceptors> block.) It would look a little something like this:
<interceptor-stack name="customDefaultStack">
    <interceptor-ref name="exception"/>
    <interceptor-ref name="alias"/>
    <interceptor-ref name="servletConfig"/>
    <interceptor-ref name="i18n"/>
    <interceptor-ref name="prepare"/>
    <interceptor-ref name="chain"/>
    <interceptor-ref name="debugging"/>
    <interceptor-ref name="scopedModelDriven"/>
    <interceptor-ref name="modelDriven"/>
    <interceptor-ref name="fileUpload"/>
    <interceptor-ref name="checkbox"/>
    <interceptor-ref name="multiselect"/>
    <interceptor-ref name="staticParams"/>
    <interceptor-ref name="actionMappingParams"/>
    <interceptor-ref name="requestHeader" />
    <interceptor-ref name="params">
      <param name="excludeParams">dojo\..*,^struts\..*</param>
    </interceptor-ref>
    <interceptor-ref name="conversionError"/>
    <interceptor-ref name="validation">
        <param name="excludeMethods">input,back,cancel,browse</param>
    </interceptor-ref>
    <interceptor-ref name="workflow">
        <param name="excludeMethods">input,back,cancel,browse</param>
    </interceptor-ref>
</interceptor-stack>

Nice and simple, and now you can get your request headers directly injected into your action.

One more optional step: if you don't want all of your actions injected, but would rather explicitly mark the ones that should be, you can create a marker interface, like so:
public interface RequestHeaderParameterAware
{
}

Then you update your RequestHeaderInterceptor class, adding the following:
@Override
public String doIntercept(ActionInvocation invocation) throws Exception {
    Object action = invocation.getAction();
    if (action instanceof RequestHeaderParameterAware) {
        return super.doIntercept(invocation);
    }
    return invocation.invoke();
}

With that little bit of extra code, you now have a system in which any actions implementing the RequestHeaderParameterAware interface will have request headers automatically injected.

Thursday, April 28, 2011

Letting the locksmith install the lock

Authentication and authorization of web applications seems to be one of those things that is handled in a million different ways. It is frequently built right into the application itself. Often, building a login page is used as a beginner's tutorial for writing code that hits a database. However, it is also one of those things that is so incredibly easy to mess up that it is scary.

Hackers are getting smarter and smarter all the time. There is not much we can do about that. However, we do have one advantage - authentication technology is improving all the time as well, if you choose to take advantage of it. That is one reason why I promote the use of third-party tools for authentication whenever I can. They have done it before, they know the pitfalls, and they know how to avoid them.

The solution that I find the best is a product called OpenAM. Originally released by Sun as OpenSSO, the product was forked after Oracle bought Sun and announced they would no longer be supporting OpenSSO. ForgeRock now maintains OpenAM, and it is still pushing the boundaries of secure web access. On top of that, it is relatively easy to set up and use, and has a whole lot of other benefits besides secure authentication.

The way OpenAM works is that it is actually a standalone server application. It has a nice built in web interface for configuring all of the authentication and authorization rules for all of your websites. It basically sits out there all by itself, waiting to direct traffic coming into your server.

Now here is the fun part. OpenAM also has what they call Agents. An Agent is a plug-in of sorts that you install on your web server. They have agents for most of the major versions of most of the web servers out there. The Agent sits on your server and listens to all of your incoming traffic. It also has a list of URLs that it needs to be aware of. This may be a blacklist or a whitelist, depending on how you set it up, but the long and short of it is this - the Agent knows, from the OpenAM server, which URL accesses need to be authenticated.

If a request comes in for a URL that the Agent knows it needs to protect, it checks for a secure cookie from the OpenAM server. If the requester does not have that cookie (as is the case for first-time requests), it sends a redirect over to the OpenAM server. It is kind of like a cop guarding a building: if you don't have ID, he sends you to the courthouse to get some.

Once the user is redirected to the OpenAM server, they see a login page there. Once they provide satisfactory credentials, the OpenAM server sends them back to where they were heading in the first place. The server acts like the judge, who has given his OK for you to go into the building. Having done that, he sends you on back to the cop.

Once the user gets back to your web server, the Agent sees that the OpenAM cookie now exists. Wanting to verify that it is a good cookie, being the good cop he is, he gives the judge a quick call. The Agent connects directly with the OpenAM server, and asks if the cookie is still a valid one. Assuming an all clear from the server, the Agent sends the request on to the final destination.

And that is the gist of it. All of the authentication is handled by much more experienced folk than you or I, and everybody is happy. As an added benefit, this is also what is known as a single sign on solution. That cookie that the user receives from the server? It sticks around in their browser as long as it is open. That means that the user can go to any web page on any server protected by the OpenAM server, and never have to log in again.

In addition, there are other benefits, such as complex authorization rules for who does or does not have permission to access certain parts of your site, as well as built-in support for acting as a SAML2 Identity Provider. And to top all that off, all of these benefits are free. Free makes everything taste better.

I plan on following this post up with more specifics on how to get OpenAM set up and running, so stay tuned.

Friday, March 25, 2011

Hibernate Audit Logs with Spring Security

I have recently had the worst time finding a solution to a seemingly simple problem - I had created a Hibernate interceptor to automate audit logs in the database, but I couldn't figure out how to obtain the currently logged in user.

If you've never created a Hibernate interceptor, there is really only one critical piece of information you need to know here - there is no way that I know of to obtain the current HttpServletRequest object inside of the interceptor. This was my biggest problem, as this system stored the currently authenticated user object in the session, but I couldn't find a way to get to the session.

In researching how to do this, I read in a lot of places that you should implement your login solution using Spring Security. That way, from literally anywhere in the system, you can use the following code to retrieve the authenticated user:
SecurityContext secureContext = SecurityContextHolder.getContext();
Authentication auth = secureContext.getAuthentication();
Object principal = auth.getPrincipal();

String userName = null;
if (principal instanceof UserDetails) {
   UserDetails userDetails = (UserDetails) principal;
   userName = userDetails.getUsername();
} else {
   userName = principal.toString();
}

I was glad that there was such an easy solution to my problem, except for one minor hiccup - the project I was working on didn't use Spring Security for authentication. I also did not have the time or the authority to change that.

However, with a little more digging, I found that you can add Spring Security alongside other authentication systems, so you can make use of some of these other features. So I set out with a new goal - find the minimum amount of Spring Security configuration necessary in order to execute the above snippet.

The first step was relatively easy. It turns out that you just add the following code to your login logic, after the user successfully authenticates:
Authentication auth = new UsernamePasswordAuthenticationToken(
        user.getUsername(), 
        user.getPassword()
    );
SecurityContextHolder.getContext().setAuthentication(auth);

With just that code snippet alone, the audit logs seemed to work great. I tested it, everything checked out, so I pushed my changes out to the production system. Unfortunately, I then bumped into one of the pitfalls of having only one person testing - the above solution alone is not thread safe.

When first confronted with that statement, many developers familiar with Spring Security will scoff at my inexperience and let me know that SecurityContextHolder is backed by a ThreadLocal, so we should be good to go. However, a little bit of additional digging reveals the following weakness:
In an application which receives concurrent requests in a single session, the same SecurityContext instance will be shared between threads. Even though a ThreadLocal is being used, it is the same instance that is retrieved from the HttpSession for each thread. This has implications if you wish to temporarily change the context under which a thread is running. If you just use SecurityContextHolder.getContext().setAuthentication(anAuthentication), then the Authentication object will change in all concurrent threads which share the same SecurityContext instance.

As it turns out, there is one more piece of critical configuration you need in order to enable all of this to work successfully. You need to create a Spring bean for the class SecurityContextPersistenceFilter. From the same page as above:
In Spring Security, the responsibility for storing the SecurityContext between requests falls to the SecurityContextPersistenceFilter, which by default stores the context as an HttpSession attribute between HTTP requests. It restores the context to the SecurityContextHolder for each request and, crucially, clears the SecurityContextHolder when the request completes.

So the final piece of the puzzle is to add the following bean to your applicationContext (and make sure it actually runs as a servlet filter on each request, for example via a DelegatingFilterProxy entry in web.xml):
<bean id="securityContextPersistenceFilter" class="org.springframework.security.web.context.SecurityContextPersistenceFilter"/>

So, long story short, the above three code snippets are all that you need in order to implement Spring Security's ability to retrieve the current user from anywhere in your system.
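To tie this back to the original problem, here is a rough sketch of the kind of Hibernate interceptor these snippets enable (the modifiedBy property name is just an example of an audit column, not something from my actual project):

import java.io.Serializable;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class AuditInterceptor extends EmptyInterceptor {
    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
            Object[] previousState, String[] propertyNames, Type[] types) {
        boolean changed = false;
        for (int i = 0; i < propertyNames.length; i++) {
            // Stamp the audit column with whoever Spring Security says is logged in
            if ("modifiedBy".equals(propertyNames[i])) {
                currentState[i] = currentUserName();
                changed = true;
            }
        }
        return changed;
    }

    private String currentUserName() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return (auth == null) ? "unknown" : auth.getName();
    }
}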

Wednesday, March 2, 2011

Novell IdM Local Variables

This is a post that is really going to make sense only to those of you who work with building policies with Novell's Identity Manager system. I ran into this problem recently, and wanted to get it out into the ether in case others run into the same issue.

In Novell policies, it is possible to create local variables. One typically uses these local variables in order to temporarily store information about a user. There are two different types of scope that you can use: policy or driver.

I was already well versed in the policy scope. A policy-scoped variable is available to any other rule within the same policy, allowing you to transfer information from one rule to another.

The driver scoped local variable was, I assumed, a way to transfer information between policies about a particular transaction. What I have discovered, though, is that driver scoped means that the variable remains set for as long as the driver is running, across all transactions.

This came back to bite me because of how I was using the variable. Inside of the Create Rule, I was setting a local variable "genCN", which is a generated CN value. I was then setting this to be the default value for the CN attribute. Note that I was not setting the Operation Attribute CN, but rather the default value for CN - thus I cannot access Operation Attribute CN later in the transaction.

Later, in the Command Rule, I needed access to "genCN" if it had been set earlier. So I did an "if local variable is set" check, expecting that for updates genCN would not be set, and I would need to query the meta tree to obtain the object's CN. Come to find out, though, I was picking up a CN value from that local variable when an object was being updated - but unfortunately for me it had the value of a previous entry's CN. Oops.

Why in the world driver-scoped local variables would function like that, I have no idea. They basically now take on the role of GCVs that you can update dynamically. Furthermore, there is no way to pass information about a particular transaction from one policy to another, unless you are sure to be setting that variable each time through the driver.

Oh well, such is a Novell user's life, I suppose. I was able to come up with a workaround for my issue, and no harm done. Basically, I just added a policy at the front of my driver that sets that local variable to an empty string, and instead of doing an "is set" check, I check that it is not equal to the empty string. Just wanted to throw this out there in case anybody else was having a similar issue.

Tuesday, March 1, 2011

Maven Release Plugin

I have recently started using a plugin for Maven that has been an incredible time saver, so I thought I would post a quick tutorial on what it is and how to use it. It is called the Maven Release Plugin, and when developing a java library it should be your new best friend.

Basically, this plugin automates a ton of the little manual steps that you have to go through every time you perform a release. It will:
  1. Increment the version number of the project in the pom
  2. Check the updated version number into svn
  3. Add a tag to the project in svn for the current release
  4. Compile, Test, Package
  5. Upload the packaged jar file to your local Maven repository
  6. Append “-SNAPSHOT” to the current version in the project pom for continued development
  7. Check the updated version number into svn

It makes a few assumptions in order to execute successfully:
  1. Before trying to release, all of your local changes must be checked in. It will only release what is current in svn.
  2. Your current version number, the first time you use this, must end with “-SNAPSHOT”
  3. You must have a local Maven repository set up

Once you have this all set up, then releasing a new version of your library becomes as easy as checking your regular code changes into svn and running a simple maven command. No more worrying about remembering to increment your version number, no more concern about not having things checked into svn. It mandates a lot of the best practices that one should be following anyway.

In order to get this set up, there are a handful of steps that must be followed:
  1.  You must have the “svn” command on your command line execution path. For Windows, I downloaded and installed SlikSvn, which automatically adds the svn command to your path (requires a restart after installation)
  2. Add the following block to your project’s pom.xml at the top level under the root <project> node:
    <scm> 
       <connection>scm:svn:https://your.svn.url/ProjRoot</connection> 
       <developerConnection>scm:svn:https://your.svn.url/ProjRoot</developerConnection> 
    </scm>
    
  3. Add the following block to your project’s pom.xml under <build><plugins>:
    <plugin>
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-release-plugin</artifactId>
       <version>2.1</version>
    </plugin>
  4. If you are not already set up to deploy to your local Maven repository, you need to add the following block under <build><extensions>
    <extension>
       <groupId>org.apache.maven.wagon</groupId>
       <artifactId>wagon-webdav</artifactId>
       <version>1.0-beta-2</version>
    </extension>
  5. If you are not already set up to deploy to your local Maven repository, you need to add the following block under <distributionManagement>
    <repository>
       <id>internal</id>
       <name>Internal Release Repository</name>
       <url>dav:http://your.internal.repository</url>
    </repository>
In order to run the plugin, you just execute "mvn release:prepare release:perform", or from Eclipse you would do a Run As > Maven build... and put "release:prepare release:perform" in the Goals.
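As a side note, if you would rather tell the plugin which version numbers to use instead of answering its interactive prompts, the release plugin accepts releaseVersion and developmentVersion properties along with Maven's batch mode (double-check the plugin documentation for the version you are using):
    mvn --batch-mode release:prepare release:perform -DreleaseVersion=1.2.0 -DdevelopmentVersion=1.3.0-SNAPSHOT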

That's all there is to it. With a single command, you can now handle versioning, building, testing, and deploying of your java library.

Sunday, February 20, 2011

Greatest Weakness

I am sure, if you have been on even a single interview, chances are good that you have been hit with The Question. You know the one - the question that most everybody hates, and to which there is no real good answer out there. "What is your greatest weakness?" How are you supposed to come up with an answer to that question, when you are trying to make yourself look as good as possible to these people? And why the heck am I wasting time writing about a silly interview question inside of a tech blog?

Well, I have been thinking about this a lot recently, and it occurs to me that this question is actually one of the keys to being a good programmer. For example, think about that guy that most of us have met at least once - he thinks he is God's gift to programming, when in actuality you wouldn't let him touch your code with a 10-foot pole. Programmers like this are severely handicapped by their own mental image of themselves - believing oneself to be God-like prevents you from confronting and overcoming your own limitations.

While this fact is true in other professions as well, in programming it is particularly important because there are concrete methods we can use to improve ourselves. You should constantly be looking at your own development as a programmer, and finding those places where you suck. Rest assured, there is some way in which you suck. Even the greatest programmer in the world sucks in some way. Find your suck, and fix it.

I am hardly the first person to think of this. Jeff Atwood's blog Coding Horror is one of the few blogs that I follow closely, and he has a great many posts in this vein.

I think, though, that this can be taken beyond just being aware of your weaknesses. Really, the key to improving as a developer is self-honesty. As the Greeks said, know thyself. Try to get a good handle on what your strengths are and what your weaknesses are. In fact, go so far as to write out a list of your strengths and weaknesses. Make them real weaknesses, too, not those cheesy strengths-worded-as-weaknesses (e.g. "Sometimes I just work too hard and get too much done" - eck). As an example, I will expose my own lists in this area. Strengths:
  • I am quick - I can typically bang out changes at a quicker than expected pace
  • I am pretty good at debugging hard to figure out bugs
  • I am good at working with legacy systems and retrofitting bad code
Weaknesses:
  • Once I find a solution, I can latch onto it, and have a hard time leaving it for new solutions
  • I can be kind of absent minded, and forget things that are not directly part of my attention at the moment
  • I can overlook some edge cases when testing a system

Once you have your list, figure out what you are trying to do to overcome them:
  • Once I find a solution, I can latch onto it, and have a hard time leaving it for new solutions - so I sometimes, time permitting, try to discard a solution once I find it, and start from scratch, going in a different direction.
  • I can be kind of absent minded, and forget things that are not directly part of my attention at the moment - so I make sure to make heavy use of Outlook, both with the calendar and task lists, so that it reminds me to do stuff when I need it.
  • I can overlook some edge cases when testing a system - so I always try to spend extra time looking for edge cases, and sometimes ask others for opinions, to help flesh out my testing.

If you are really honest with yourself, and really understand where your strengths and weaknesses lie, you can make use of that both in your current job and in any future job interviews. By understanding yourself, you can know when and where you should volunteer for tasks, and where you should ask for help. And when you are in that interview and the dreaded question comes up, you can give them one hell of a surprise by actually answering it with a real weakness, paired with your plan for strengthening that weakness - which may actually give you a leg up, as it will set you apart from all of the "I just care too much!" answers.

Thursday, February 10, 2011

Metadata Hell

I recently had the opportunity to attend JavaOne in San Francisco, which was a great overall experience. One thing that I noticed, though, was the apparent love affair with annotations. It would seem that annotations are The Next Big Thing. In almost every session that I attended, there was some person going on and on about how great annotations are. Look, you can configure things on a single line! OOOH, you can configure properties in line with your code! Holy crap, you can embed connection information into annotation tags directly in your Java code!

No, I really was not kidding about that last one. Apparently, the new best practice in Java is to put all of your configuration information into annotations. By all, I really mean all configuration information, right down to your database connection strings. The very thing that I have been taught since I saw my first for loop, that which has been banged into me repeatedly as the worst danger in the world, is now best practice!

We are now hard coding this stuff in our code.

From what I can gather, this was put forth as the solution to what was commonly referred to as XML Hell. People didn't like all of the XML configuration that had to happen to get applications running. I can understand this concern - XML files can be a pain in the rear end. I think, though, that the mark was missed a bit on this one. All we have really done is trade one Hell for another.

My biggest concern with annotations is the unnecessary complication of the code files. Take the following code, taken from a Struts2 Convention plug-in example:
package com.example.actions;

import com.opensymphony.xwork2.ActionSupport; 
import org.apache.struts2.convention.annotation.Action;
import org.apache.struts2.convention.annotation.Actions;
import org.apache.struts2.convention.annotation.Result;
import org.apache.struts2.convention.annotation.Results;

@Results({
  @Result(name="failure", location="fail.jsp")
})
public class HelloWorld extends ActionSupport {
  @Action(value="/different/url", 
    results={@Result(name="success", location="http://struts.apache.org", type="redirect")}
  )
  public String execute() {
    return SUCCESS;
  }

  @Action("/another/url")
  public String doSomething() {
    return SUCCESS;
  }
}
Here is the exact same Action class, without the annotations:
package com.example.actions;

import com.opensymphony.xwork2.ActionSupport; 

public class HelloWorld extends ActionSupport {

  public String execute() {
    return SUCCESS;
  }

  public String doSomething() {
    return SUCCESS;
  }
}
That is a dramatic reduction in complexity, and that is just on a simple example application. What is more, it is now encouraged that every single third party library, framework, whatever, should now all use annotations. Look at the above code again: that is the added complexity from a single framework's configuration. What is going to happen when you have a framework and a handful of libraries and they all want to use different annotations?

The rub of it is, in my mind, that there was another solution available - simplify the XML. Having XML files isn't a problem; it provides a nice, isolated place to put the metadata of your application, keeping it separate from the logic of your application. However, most XML configuration consists of very complex, large, tree-like structures. Take, for example, the XML needed to configure a Java servlet:
<servlet>
    <servlet-name>HelloWorld</servlet-name>
    <servlet-class>com.jspbook.HelloWorld</servlet-class>
</servlet>

<servlet-mapping>
    <servlet-name>HelloWorld</servlet-name>
    <url-pattern>/HelloWorld</url-pattern>
</servlet-mapping>
Here is the annotation configuration version:
@WebServlet(name="HelloWorld", urlPatterns={"/HelloWorld"})
I can admit that the annotation above is simpler than the current XML configuration. However, here is a proposed alternative in XML, which would also reduce it to a single line:
<servlet name="HelloWorld" urlPattern="/HelloWorld" class="com.jspbook.HelloWorld"/>
With that, you have the simplicity of a single line of configuration, but still safely tucked away in a separate XML file.

In fact, it is my theory that there is nothing that you can do with annotations that you can't do in just as few lines in an XML file. The only difference is that in the XML file you need to specify what class you are referring to. That is it.

In short, while I see the need for something to be done to reduce the configuration pains that we as developers currently experience, I absolutely do not think that the answer is to move that pain to a place where I have to stare at it and read through it every single day. There are other, better solutions out there, if we would but take the time to look for them.

Wednesday, February 2, 2011

Struts2 vertical radio list without custom template

I have seen a lot of questions sent out into the void of the internet wondering how you could do something as simple in Struts2 as making a radio button list display vertically instead of the default horizontal layout. The answer always seems to be to write a custom template for it. While that is an effective solution, it often does not fit the problem at hand.

I had an issue not long ago where I had to put a radio list inside of a grid, interspersed with other form elements. In this case, writing a custom template with <br> tags after each radio button did not work. So I came up with another way of handling this pesky situation. It is not quite as elegant as writing a custom template, but it will get the job done.

Basically, what you have to do to lay the radios out vertically is use multiple radio tags, each pointing to the same selector variable, and each with a limited slice of the actual list. Here is a simple example:
<s:radio name="valueField" list="#{'value1':'label1'}"/><br>
<s:radio name="valueField" list="#{'value2':'label2'}"/><br>
<s:radio name="valueField" list="#{'value3':'label3'}"/>
That will display one radio button per line, still connected as a single list. It will default if "valueField" is already set, and it will successfully post back into "valueField".

As a bit meatier of an example, take the following code. This code shows an Action class with a List of Objects, which we will then iterate over in the jsp page. This shows a more typical setup.

ListObject.java:
public class ListObject {
    private String key;
    private String label;

    public ListObject(String key, String label) {
        this.key = key;
        this.label = label;
    }

    public String getKey() {
        return this.key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public String getLabel() {
        return this.label;
    }

    public void setLabel(String label) {
        this.label = label;
    }
}
ListAction.java
public class ListAction extends ActionSupport {
    private List<ListObject> myList;
    private String mySelection;
    
    public String input() {
        myList = new ArrayList<ListObject>();
        myList.add(new ListObject("val1","label1"));
        myList.add(new ListObject("val2","label2"));
        myList.add(new ListObject("val3","label3"));

        mySelection = "val2";

        return INPUT;
    }

    public String execute() {
        // Do stuff with list here
        return SUCCESS;
    }

    public List<ListObject> getMyList() {
        return myList;
    }
    public void setMyList(List<ListObject> newList) {
        this.myList = newList;
    }

    public String getMySelection() {
        return mySelection;
    }
    public void setMySelection(String newSelection) {
        this.mySelection = newSelection;
    }
}
list.jsp
<s:iterator value="myList">
    <s:radio theme="simple" name="mySelection" list="#{key:label}"/> <br>
</s:iterator>

At the core of this method is specifying an independent sublist for each radio tag. The syntax inside the list attribute is the way you specify an in-place list - "#{'key':'value','key2':'value2'}". For our lists, though, instead of using string literals, we are pulling the values from our iterator. Inside an iterator tag, the iterator object is at the top of the value stack, so just referencing the names of the object's data members pulls out the values successfully. Then Struts magic takes care of combining all of those radio tags with the same name so that they point to the single back-end data member.

Creating custom templates can often be the better solution, but in situations where that doesn't work, this is a good way to go as well.

Tuesday, February 1, 2011

Creating a custom Struts2 validator

Recently I was working on a project which had a very specific validation requirement that the Struts2 validation framework did not support out of the box. I had a dynamic collection of objects, and I needed to make sure that an attribute of those objects was distinct across the entire collection.

I could have added a validate method to my action class and performed the validation there manually, but I wanted to keep everything in the XML validation files. So I used the opportunity to learn how to write a custom Struts2 validator.

As it turns out, it is not nearly so complex as it sounds. To start things off, I created a new class that extended FieldValidatorSupport, since I wanted this validator to be a field validator. I then overrode the "validate" method, which does the actual validation. So here would be the very basic validator:
package my.package.name;

import com.opensymphony.xwork2.validator.ValidationException;
import com.opensymphony.xwork2.validator.validators.FieldValidatorSupport;

public class UniqueCollectionValidator extends FieldValidatorSupport
{ 
    @Override
    public void validate(Object object) throws ValidationException
    {
    }
}

I am going to jump ahead now and show you how to hook up said validator, because that comes into play in building it. The first step in wiring up a custom validator is adding a file called validators.xml to your project's classpath. The contents of the file should look like this (the validator name is whatever you want to reference it by - I will use uniqueCollection):
<validators>
    <validator name="uniqueCollection" class="my.package.name.UniqueCollectionValidator"/>
</validators>

The second step is just referencing the validator. Open up your action's validation.xml file and add the following underneath the field you want it to apply to:
<field-validator type="uniqueCollection">
    <message>This collection contains duplicate values</message>
</field-validator>
That is everything needed to wire up your custom validator. Now we get to the fun part - making your validator, you know, validate something. First off - and this is the part that needed the wiring in place to show properly - is adding parameters. As it turns out, you can add as many parameters to your validator as needed, and it does not take much. In this case, we need as a parameter the name of the attribute on the object that needs to be distinct.

First step to adding a parameter is to add the private member variable to your validator class, and give it getters and setters, like so:
package my.package.name;

import com.opensymphony.xwork2.validator.ValidationException;
import com.opensymphony.xwork2.validator.validators.FieldValidatorSupport;

public class UniqueCollectionValidator extends FieldValidatorSupport
{
    String uniqueFieldName = "";

    public String getUniqueFieldName()
    {
        return uniqueFieldName;
    }

    public void setUniqueFieldName(String uniqueFieldName)
    {
        this.uniqueFieldName = uniqueFieldName;
    }

    @Override
    public void validate(Object object) throws ValidationException
    {
    }
}

Once we have the getter and setter in our validator, we populate the parameter by simply adding a param element to the validator reference in our validation.xml:
<field-validator type="uniqueCollection">
    <param name="uniqueFieldName">myUniqueFieldName</param>
    <message>This collection contains duplicate values</message>
</field-validator>

All that is left now is filling in the details of the validator.
package my.package.name;

import java.util.ArrayList;
import java.util.List;

import com.opensymphony.xwork2.validator.ValidationException;
import com.opensymphony.xwork2.validator.validators.FieldValidatorSupport;

public class UniqueCollectionValidator extends FieldValidatorSupport
{
    String uniqueFieldName = "";

    public String getUniqueFieldName()
    {
        return uniqueFieldName;
    }

    public void setUniqueFieldName(String uniqueFieldName)
    {
        this.uniqueFieldName = uniqueFieldName;
    }

    @Override
    public void validate(Object object) throws ValidationException
    {
        String fieldName = getFieldName();
        Object val = getFieldValue(fieldName, object);
        
        List collection = null;
        List<String> soFar = new ArrayList<String>();
        
        if ( val instanceof List )
        {
            collection = (List) val;
        }
        else
        {
            return;
        }
        
        for ( Object obj : collection )
        {
            String toCheck = "";
            
            if ( this.uniqueFieldName.length() == 0 )
            {
                toCheck = (String) obj;
            }
            else
            {
                toCheck = (String) getFieldValue(this.uniqueFieldName,obj);
            }
            
            if ( soFar.contains(toCheck) )
            {
                addFieldError(fieldName, object);
                return;
            }
            
            soFar.add(toCheck);
        }
    }
}
This is kind of a lot of code to just dump on you, but most of it is pretty straightforward. There are just a few useful little bits to point out.

First off, when you enter a field validator, the object passed in is the object (usually your action) that contains the field we are trying to validate. In order to get the specific field we are dealing with, we use two member functions of FieldValidatorSupport - getFieldName and getFieldValue.
String fieldName = getFieldName();
Object val = getFieldValue(fieldName, object);
These get you the name of the field that we are validating and the current value of that field.

This validator can only work against List objects, so the first thing I do after that is make sure that it is a List, and cast the val appropriately. If it is not a List, then I just return without validating anything.
if ( val instanceof List )
{
    collection = (List) val;
}
else
{
    return;
}

At that point, I am able to move forward with my logic for determining uniqueness. The only other key piece of information here is the way to fail validation. If you hit the case where your validation fails, you can simply do the following:
addFieldError(fieldName, object);
return;
This will tell Struts that validation failed on this field, and it will use the normal method of choosing what message to display from the validation xml file.

That's all there is to it.

Sunday, January 30, 2011

Learning jQuery in the Greasemonkey playground

If you want to learn jQuery, but are not sure how to go about it, then you should go check out Greasemonkey, which is a Firefox add-on. Basically, it allows you to provide Firefox with a JavaScript file and a URL pattern, and it will run that JavaScript on the page every time you visit a matching URL.

Most people kind of scratch their heads at this, not quite realizing the power of a tool like that. It is cool in and of itself, because it allows you to tweak your favorite webpages any way you like, and once installed it is transparent.

Another really cool benefit of it, though, is it makes a great testbed for practicing your javascript skills. You don't have to have an entire website to practice with, because you can practice on any site out there.

For example, here is a very small Greasemonkey script.

// ==UserScript==
// @name           Google Test
// @namespace      gtest
// @description    A small test script that picks on google
// @include        http://www.google.com/
// ==/UserScript==

document.getElementsByName("btnI")[0].style.display = "none";

All this does is remove the I'm Feeling Lucky button on the Google homepage. Not a lot, but as you can see, there is not a lot of buildup, either - you can jump right into doing JavaScript manipulations against whatever page you want.

The top part of the above script is the script metadata - it tells Greasemonkey what to do with the script. The most important part there is the @include tag - this is the URL pattern on which this script will execute.

What is even better is that you can start to play with jQuery as well. There is one more gotcha with jQuery - you need to include it in your script. You do this with a relatively simple little bit of metadata. Here is the same script, this time using jQuery.

// ==UserScript==
// @name           Google Test
// @namespace      gtest
// @description    A small test script that picks on google
// @include        http://www.google.com/
// @require       http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.js
// ==/UserScript==

$("input[name=btnI]").hide();

Notice the @require metadata - that tells Greasemonkey to include that external javascript file into your script.

Alright, we have the scripts ready, but how do we start using this thing? There are just a few steps to get you going:
  1. Go to the Greasemonkey installation page, and install the add-on
  2. Grab the script text above, and save it onto your computer somewhere. Call it "googleTest.user.js". The ".user.js" part is convention when dealing with Greasemonkey scripts.
  3. With Firefox open and Greasemonkey installed, grab the googleTest file, and drag it onto Firefox. You should see a Greasemonkey pop up asking if you want to install. Tell it yes.
  4. Go to Google, and see that we have gotten rid of the I'm Feeling Lucky button.
That is all there is to it. Once you have the script installed, you do not have to go through the drag-and-drop process again. Simply go to Tools > Greasemonkey > Manage User Scripts... and select your script to edit. It should open the script in your favorite text editor, ready for you to edit away to your heart's content. As soon as you save the file, the script is updated in Firefox and ready to test out.

Another great tool that goes hand in hand with this process is Firebug. Most web developers probably already know and love Firebug, but just in case you are not familiar, it is another Firefox add-on that is incredibly useful for web development. It is particularly useful for writing Greasemonkey scripts, because you need to know what to edit with your javascript. For example, I used it to determine what element to grab to get rid of the I'm Feeling Lucky button on Google.

I have successfully used Greasemonkey to expand my own jQuery knowledge, and it was an effective and entertaining way of going about it. If you want to learn more about jQuery, I highly recommend this method. Good luck!

Saturday, January 29, 2011

A quick jQuery primer

I am sure that if you are a web developer, then at some point in your career you have had to deal with javascript. It also seems like more and more people are turning to jQuery. If you already use jQuery, this post is not for you. This is basically just a quick overview, mostly directed at people that are familiar with javascript, but have little to no jQuery experience.

So, to start things off, why use jQuery? Well, basically, jQuery is JavaScript that works well. It simplifies a lot of the things in JavaScript and makes them more intuitive. Just as a quick example, take the simple task of toggling the visibility of a div on your page.

Take a simple page:
<a href="#" id="toggleLink">Toggle Div</a>
<div id="toggleText">This is some toggling text</div>

Here is the code for toggling visibility in javascript:
document.getElementById("toggleLink").onclick = function() {
    var div = document.getElementById("toggleText");
    if (div.style.display == "none") {
        div.style.display = "block";
    }
    else {
        div.style.display = "none";
    }
};

Here is the code for toggling visibility in jQuery:
$("#toggleLink").click(function() { $("#toggleText").toggle(); });
That was an 8 line reduction in code, not to mention a heck of a lot easier to understand.

The key to understanding most of jQuery is to look at its selector technology. That is all the stuff inside the $() in the code snippet above. jQuery uses the $ as its primary object - in fact, there are several things you can do just with $.whatever. With parentheses after the $, it is searching for stuff.

The selector syntax is very similar, if not identical, to the CSS selector syntax. For example, if you do $("div"), that will return all of the divs on the page. Better yet, jQuery will take any operation you perform, and apply it to everything that is returned by the selector, so that $("div").toggle() would toggle the visibility of every single div on the page.

Just to cover some of the basics of the selector syntax, here is a quick and dirty list of some of the commonly used things:
  • "#" - used to reference a particular ID, such as $("#mydivid")
  • "." - used to reference a class name, such as $(".myclassname")
  • "," - used to concatenate selectors, such as $("#firstdiv, #seconddiv") to select both the div with the ID "firstdiv" and the div with the ID "seconddiv"
  • " " - used for hierarchy of selectors, such as $("#mydivid a") to select all links that are children of the div with the id "mydivid"
There is a very large list of functions that jQuery can call on things. Just a few of the commonly used ones:
  • val()/val("blah") - used to get or set the value of a form field
  • css("cssattr")/css("cssattr","attrval") - used to get or set any css attribute on the object
  • attr("attrname")/attr("attrname","attrval") - used to get or set any html attribute on the object
  • insertBefore(object)/insertAfter(object) - used to add new html objects to the page
  • parent()/children() - used to get the parent object or the children objects of the object
There are many many more things as well, far more than I could possibly get into here. I highly encourage you to check out the jQuery documentation for more information.

Friday, January 28, 2011

Creating Custom Struts2 Interceptors

Now we're into it. The meaty stuff. If you have not read my post on Struts2 Interceptors, I recommend you head over that way first. With me so far? Great, let's dig in.

So, why would you want to create your own interceptor? Granted, there are a whole bunch of provided interceptors. There are several that are provided but not used in the default interceptor stack that might be worth checking out. Things like the LoggingInterceptor (logs when you enter and leave an action) and the TimerInterceptor (tracks how long an Action takes to execute) are wonderful tools that should be looked into. So, what is left?

Basically, if you ever find yourself adding code or wishing you could add code to each of your actions, you have a valid candidate for a custom interceptor.

The case study we are going to follow today is a problem I ran into a while back. I had a standard JSP header and footer that were included in every page of my site. I wanted to be able to turn on some debugging features in those pages whenever the struts.devMode constant was set to true. However, I couldn't find any way to check the struts.devMode constant from the JSP page. Could be that there is a way, but I couldn't find it. I did find that you could grab it from within your action class, set it on a member of your action class, and access that from your JSP. Eck.

So I decided to make a custom interceptor. This interceptor would be responsible for getting the current value of the struts.devMode constant, and putting that on the Value Stack. In naming this interceptor, I went a step further, and called it GenerateConstants, figuring I could use this to inject any of the struts constants that I might need in the future. To kick things off, I started with the following code inside a file called GenerateConstants.java:

package mypackage.interceptor;

import com.opensymphony.xwork2.ActionInvocation;
import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

public class GenerateConstants extends AbstractInterceptor {
    @Override
    public String intercept(ActionInvocation invocation) throws Exception {
        // Perform pre-action stuff here
        String result = invocation.invoke();
        // Perform post-action stuff here
        return result;
    }
}

Excellent, we have the base structure. Time to fill it out a little bit. First we add the devMode property, and have struts inject it for us.

package mypackage.interceptor;

import com.opensymphony.xwork2.ActionInvocation;
import com.opensymphony.xwork2.inject.Inject;
import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

public class GenerateConstants extends AbstractInterceptor {
    private String devMode;

    @Override
    public String intercept(ActionInvocation invocation) throws Exception {
        // Perform pre-action stuff here
        String result = invocation.invoke();
        // Perform post-action stuff here
        return result;
    }

    @Inject("struts.devMode") 
    public void setDevMode(String devMode) {
        this.devMode = devMode;
    }
}

Finally, we push this value into the Value Stack:

package mypackage.interceptor;

import com.opensymphony.xwork2.ActionInvocation;
import com.opensymphony.xwork2.inject.Inject;
import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

public class GenerateConstants extends AbstractInterceptor {
    private String devMode;

    @Override
    public String intercept(ActionInvocation invocation) throws Exception {
        // Perform pre-action stuff here
        if ( devMode != null )
        {
            invocation.getInvocationContext().put("devMode", devMode);
        }

        String result = invocation.invoke();
        // Perform post-action stuff here
        return result;
    }

    @Inject("struts.devMode") 
    public void setDevMode(String devMode) {
        this.devMode = devMode;
    }
}

That is the entire amount of interceptor code needed. There is still a little more work to be done, though. Now comes the fun part - hooking it up! Crack open your struts.xml file, and we'll work on the plumbing.

The first thing you have to do is declare the interceptor, so that struts is aware that it exists. Inside your struts.xml file, add the interceptors tag if it does not already exist, and add your interceptor to it, like so (the package name in the XML is just a placeholder - use your own):

<struts>
    <package name="default" extends="struts-default">
        <interceptors>
            <interceptor name="generateConstants"
                class="mypackage.interceptor.GenerateConstants"/>
        </interceptors>

        <!-- actions go here -->
    </package>
</struts>

Next we need to create a new interceptor stack that uses the new interceptor. Remember the default stack we talked about before? We are going to add to it now. I am calling the new stack "myStack" below - the name itself is just a placeholder, so pick whatever fits your application.

<struts>
    <package name="default" extends="struts-default">
        <interceptors>
            <interceptor name="generateConstants"
                class="mypackage.interceptor.GenerateConstants"/>

            <interceptor-stack name="myStack">
                <interceptor-ref name="generateConstants"/>
                <interceptor-ref name="defaultStack"/>
            </interceptor-stack>
        </interceptors>

        <!-- actions go here -->
    </package>
</struts>

So here we are using the "interceptor-stack" tag to define a new stack. The child tag "interceptor-ref" just points to either an interceptor or an interceptor stack that is already defined. So in this case, we are defining our new stack with "generateConstants" as the first interceptor, followed by the "defaultStack" interceptor stack, which is the default stack provided with struts.

Finally, we have one more step to carry out. Right now, we have our shiny new stack, but we still have not told anything to use it. In order to do that, we can either declare our stack inside each Action, or we can override the default stack for all actions. The first method can be useful if you have a stack that only needs to be executed on a handful of Actions. However, our interceptor should be used by all Actions, so we are going to update the default. To do that, all we need to do is add a single tag called "default-interceptor-ref", outside of the "interceptors" tag, like so:

<struts>
    <package name="default" extends="struts-default">
        <interceptors>
            <interceptor name="generateConstants"
                class="mypackage.interceptor.GenerateConstants"/>

            <interceptor-stack name="myStack">
                <interceptor-ref name="generateConstants"/>
                <interceptor-ref name="defaultStack"/>
            </interceptor-stack>
        </interceptors>

        <default-interceptor-ref name="myStack"/>

        <!-- actions go here -->
    </package>
</struts>

That's all there is to it. A few XML tags and a little bit of code, and you have a new interceptor.
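
As a quick postscript on the payoff: because the interceptor put "devMode" into the invocation context, it is visible anywhere the ActionContext is. Here is a minimal sketch (not part of the interceptor code above - the class name is made up) of reading it back from Java; from a jsp, the same value should be reachable through OGNL as #devMode in the struts tags.

import com.opensymphony.xwork2.ActionContext;

// Hypothetical helper - assumes the GenerateConstants interceptor has already
// run for the current request and stored the raw constant value under "devMode".
public class DevModeAware {
    public boolean isDevMode() {
        String devMode = (String) ActionContext.getContext().get("devMode");
        return "true".equals(devMode);
    }
}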

Thursday, January 27, 2011

Struts2 Interceptors

Interceptors are one of the most powerful, and yet seemingly least understood, features of Struts2. The Interceptor framework is basically the core of the Struts2 framework, and understanding how it works is the key to really unlocking the power of Struts2.

For any given web request, there is a series of Interceptors that get fired off before your Action class is called. Each Interceptor performs a specific job that is required for every action. Some of the core functionality of Struts2 lives in these Interceptors by default, such as populating the member variables of your action class, validating data, and handling exceptions.

The Interceptors fire based off of what is known as the Interceptor Stack. The Stack is just an ordered list of Interceptors that need to be fired prior to entering the action. When you include the struts core library in your project, it includes a file called "struts-default.xml", which defines all of the core Interceptors and a default Interceptor Stack. Since these are set up by default, and they handle most use cases, most people do not really understand the Stack, because it "Just Works".

To understand how it works, you just need to look at what happens when struts receives a new request. It starts things off by creating an object called the ActionInvocation, and calling a method called "invoke" on that object. On this first call, invoke looks at the configured Stack, gets the first Interceptor in it, and fires that Interceptor's "intercept" method.

Inside "intercept" is where the Interceptor performs whatever tasks it needs to get done. For example, there is an Interceptor available for use called the LoggingInterceptor. Inside of this Interceptor's "intercept" method, the first thing that it does is log out an "Entering" message.

After performing any necessary operations, the interceptor then calls ActionInvocation.invoke again. Bear in mind the call stack we have going so far:
    ActionInvocation.invoke()
        Interceptor1.intercept()
            ActionInvocation.invoke()

Back inside invoke, the ActionInvocation class has kept track of what Interceptors it has already fired. In this case, it has made note of the fact that Interceptor1 has already been called. It looks again at the Stack, and finds the Interceptor configured to fire after Interceptor1, and calls that Interceptor's "intercept" method.

This continues on and on, until we reach the last interceptor. After the last interceptor performs whatever it needs to do, it again calls ActionInvocation.invoke. ActionInvocation now recognizes that all of the Interceptors have been called, and proceeds to call the Action class's execute method. So the final call stack looks something like this:
    ActionInvocation.invoke()
        Interceptor1.intercept()
            ActionInvocation.invoke()
                Interceptor2.intercept()
                    ActionInvocation.invoke()
                        .
                        .
                        .
                        InterceptorN.intercept()
                            ActionInvocation.invoke()
                                Action.execute()
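
If it helps to see that pattern in miniature, here is a toy sketch of the idea - plain Java written purely for illustration, not the actual Struts2 classes - showing how a recursive invoke() can walk a list of interceptors and then call the action:

import java.util.Iterator;
import java.util.List;
import java.util.concurrent.Callable;

// Toy stand-ins for illustration only - not the real Struts2/XWork types
interface Interceptor {
    String intercept(Invocation invocation) throws Exception;
}

class Invocation {
    private final Iterator<Interceptor> stack;  // the configured Interceptor Stack
    private final Callable<String> action;      // stands in for Action.execute()

    Invocation(List<Interceptor> stack, Callable<String> action) {
        this.stack = stack.iterator();
        this.action = action;
    }

    String invoke() throws Exception {
        if (stack.hasNext()) {
            // Fire the next interceptor; it is expected to call invoke() again,
            // which is what produces the nested call stack shown above.
            return stack.next().intercept(this);
        }
        // Every interceptor has had its turn - now run the action itself.
        return action.call();
    }
}

Each interceptor's intercept() does its pre-work, calls invocation.invoke(), and then does its post-work on the way back out - which is exactly what the next few paragraphs walk through.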

This is now the part that most people are familiar with - the Action class. Your Action does what it needs to do, then returns a String value. Something like "success", "error", "input", etc. These return values end up determining what page gets displayed to the user.

This String that gets returned by your Action class - where does it go? If we look up at our call stack, we will see that it will return to ActionInvocation.invoke. Interesting. Here is the fun part - both ActionInvocation.invoke and all of the Interceptors' "intercept" methods return Strings as well. In fact, they return your String.

So now we start to unwind back up the Stack. This is another very important part of the framework. Do you ever remember seeing those weird Struts2 pictures, that show all of the Interceptors being fired twice? Once before the Action, and once after? Like this one:

[Image: the usual Struts2 request-lifecycle diagram, with the Interceptor Stack wrapping the Action on both sides]

This call stack is how that works. The Interceptors do not actually get fired twice. They are fired once, but the Action call occurs inside of them. Back to the above-mentioned LoggingInterceptor now. As I said, this Interceptor logs out an "Entering" message when it starts, then it fires "invoke". After "invoke" returns, though, this Interceptor is not done - it also logs a "Leaving" message afterward.

Here is the complete code necessary in order to log an entering and leaving message before and after every single one of your Action calls:
@Override
public String intercept(ActionInvocation invocation) throws Exception {
    logMessage(invocation, START_MESSAGE);
    String result = invocation.invoke();
    logMessage(invocation, FINISH_MESSAGE);
    return result;
}

That's it. Instead of log lines inserted at the top and bottom of every single Action class, this one little snippet set up as an Interceptor does all that work for you.

I wanted to go over how to do a custom interceptor, but I have already prattled quite a bit longer than I thought I would. Hopefully I have given you a bit better understanding on how Interceptors work, and gotten you hungry to learn how you can start writing your own. More posts will follow.

The Struts2 Framework

I'll admit, when I first started web development, I didn't like frameworks. I was a do-it-yourself kind of guy. I didn't want to have to deal with learning the ins and outs of someone else's code. Now that I have begun to use frameworks, though, I would never go back.

My current framework of choice is Struts 2. I have a lot of respect for this framework, and I feel like it does a lot for me in a really elegant way. You will probably hear me blather on about it quite a bit, as it is where I spend most of my time these days.

Struts 2 is an MVC (Model-View-Controller) framework. MVC is a design pattern used to separate the parts of a web application into distinct layers. A layer isolates one type of code so that it stays independent of the implementation of the other layers.

The View is the part of the code that actually handles how the data is presented. In Struts 2, this is typically handled by jsp files.

The Controller is what hooks up the View with the Model. It is what controls where the flow goes, what view to display, what business logic to execute. It is kind of the brain of the operation. In Struts 2, the Action class acts as the Controller. Your Action class should not have any business logic in it directly. Rather, it should call into a Service layer that executes your business logic, then alter the flow of the View based off of what it returns.

The Model is the part of the code that actually contains the business logic. I typically isolate this into a Service layer, which is called from the Action class.

The beauty of the MVC design pattern is that these layers are totally isolated from one another. If we want to start using freemarker templates instead of jsps, we could do that, and we wouldn't have to touch our Action class or Service layer. If we want to change our business logic, it should not affect our Action class or View layer.
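
To make that concrete, here is a small sketch of an Action acting purely as a Controller and handing the real work off to a Service. The class and method names below are made up for illustration, not pulled from any real project:

import com.opensymphony.xwork2.ActionSupport;

// Hypothetical Model/Service layer - the business logic lives here, not in the Action
class AccountService {
    boolean transfer(String from, String to, double amount) {
        // real business rules would go here
        return amount > 0;
    }
}

public class TransferFundsAction extends ActionSupport {
    private final AccountService accountService = new AccountService();

    private String fromAccount;
    private String toAccount;
    private double amount;

    @Override
    public String execute() throws Exception {
        // The Controller just directs traffic: call the service, then pick a result
        boolean ok = accountService.transfer(fromAccount, toAccount, amount);
        return ok ? SUCCESS : ERROR;
    }

    // Struts 2 populates these from the request parameters
    public void setFromAccount(String fromAccount) { this.fromAccount = fromAccount; }
    public void setToAccount(String toAccount) { this.toAccount = toAccount; }
    public void setAmount(double amount) { this.amount = amount; }
}

Swap the jsp out for a freemarker template, or rewrite AccountService from scratch, and the other layers never know the difference.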

There are other MVC frameworks out there. What makes Struts 2 stand out from the others? Well, to start with, it is the one that my employer wants me to use - always a good incentive. That being said, I personally still prefer Struts 2 to any of the other frameworks I have tried. My only other experience has been with Spring MVC, which seems to be a good solution as well, but made less sense to me than Struts 2 does.

Struts 2 has a lot of features, and I have barely begun to scratch the surface on all that it can do. More posts will come in the future that start talking about the Interceptor concept, the built in validation framework, and many of the other features that Struts 2 has to offer.

I leave you today with the thing that most developers always want to see in a new technology - the Hello World.

The following clips were taken from http://struts.apache.org/2.x/docs/hello-world.html.

The Action class:
package tutorial;
import com.opensymphony.xwork2.ActionSupport;
public class HelloWorld extends ActionSupport {

    public static final String MESSAGE = "Struts is up and running ...";

    public String execute() throws Exception {
        setMessage(MESSAGE);
        return SUCCESS;
    }

    private String message;

    public void setMessage(String message){
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}

The struts.xml file:
<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
    "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
    <package name="tutorial" extends="struts-default">
        <action name="HelloWorld" class="tutorial.HelloWorld">
            <result>/HelloWorld.jsp</result>
        </action>
        <!-- Add your actions here -->
    </package>
</struts>

The JSP file:
<%@ taglib prefix="s" uri="/struts-tags" %>

<html>
    <head>
        <title>Hello World!</title>
    </head>
    <body>
        <h2><s:property value="message" /></h2>
    </body>
</html>

Wednesday, January 26, 2011

Unit Testing

Unit testing is one of my favorite time-saving techniques. I am amazed at the number of people I talk to who don't write unit tests, because it takes too long, or it feels like writing everything twice, or a handful of other reasons that I have heard.

Plain and simple, if used right, unit tests can save you a whole bunch of time. It is one of those spend-a-little-to-save-a-lot techniques. I find this to be particularly true in web development.

I get rather sick of manually testing things in a web application. Each little change is a complete chore. You have to:
  1. Make the change
  2. Deploy the new app
  3. Possibly restart the web server
  4. If necessary, log in
  5. Navigate to the page in question
  6. Perform activity under examination (potentially a lot of clicks involved here)
  7. Evaluate the displayed results
Not only is this time-consuming, but it is also incredibly error-prone. What if something didn't deploy right? What if the web server is bugging out? Maybe the results are displayed incorrectly. What if you hit the wrong link, or typed in the wrong value, or whatever. You get the idea.

Another thing I see people do sometimes is write little main() functions in their code in order to test their functionality. Guess what - that is basically what a unit test is! Only now you are adding more code, and getting less support from your IDE.

Unit tests are basically just little code snippets that most modern IDEs know how to support natively. They can be automated, they can provide reports, and better yet, they can be rerun time and time again, at just a single click of a button.

Unit tests really are not all that scary, once you get into them. Here is an example of a very simple test that will always pass:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TestMyClass
{
    @Test
    public void testMyClass() {
        assertTrue(1 == 1);
    }
}

Not too bad, right? This will always tell you, every time you run the tests, that 1 does in fact equal 1. Not all that helpful, but it is a start. So, let's go with a more real-world scenario:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TestMailService
{
    @Test
    public void testSetForwarding() {
        MailService mailService = new MailService();
        mailService.setForwarding("fromaddress@domain.com", "toAddress@domain.com");

        assertTrue(mailService.getForwardingAddress("fromaddress@domain.com")
            .equals("toAddress@domain.com"));
    }
}

So, this is a slightly meatier example that tests a mail service that sets up forwarding. With this little snippet in place, you can verify that your mail service library is setting and retrieving the forwarding address correctly. If you or somebody else does something that breaks that functionality, you do not need to find out about it through your web browser.

This is by no means an exhaustive study on unit tests. It really only scratches the surface. If you have not used them before, though, I highly recommend you look into it. Save yourself some time, and make your coding world just that much better.

About Me

I'm Dustin Wilhelmi, and I am a software developer team lead for the University of Kansas. I live here in Lawrence, KS, with my wife, my 3-year-old baby girl, and a dog that barks like crazy at the growing grass.

I got my CS degree from Emporia State University, which has a very small but absolutely awesome Computer Science department. The size allowed me to have a good relationship with my teachers (yes, that's right, they actually knew who I was!) and meant I was on close terms with pretty much every CS student there. I was trained in C++, and in my professional career have worked fairly extensively in C# and in Java, which is my current language of choice.

Now that we have the basics out of the way, we can get on with the meat. Mostly I am creating this blog to keep a record of things that I stumble on - interesting solutions to problems, solid third-party libraries, whatever. I don't even know that anybody will ever read this, but it at the least gives me a good place to keep track of my stuff in an easily searchable fashion.

So that's it. Have fun, learn something, and feel free to comment or email any questions.