Thursday, September 2, 2010

Subversion and "Could not authenticate to server"

Yesterday I was trying to release the latest stable version of the dbUnit project through Maven and I lost some time trying to solve a stupid problem with Subversion: every time I ran the mvn release:prepare command I got an error saying I wasn't able to authenticate to the SourceForge SVN server.
As I had previously been more than able to release the project through this same exact procedure, I think the real source of the problem was the upgrade of the Subversion client: since my last release I had decided to move to the latest 1.6 version of the Subversion CLI, and this operation made the project checkout directory incompatible with the command line client I had been using before.

Everything was working fine, but I had forgotten something: the Subversion command line tools cache the user credentials, and the Maven release goal is performed in an unattended fashion!

If you encounter such a problem, I suggest you issue the following command, providing your credentials when prompted:

svn lock

In my case the command was:

svn lock https://dbunit.svn.sourceforge.net/svnroot/dbunit/trunk/dbunit/pom.xml

This should prompt you for credentials which will be cached by the SVN client.

Do not forget to unlock the file by issuing the unlock command, or no one else will be able to commit to that file anymore!

svn unlock
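
For the example above, that means:

svn unlock https://dbunit.svn.sourceforge.net/svnroot/dbunit/trunk/dbunit/pom.xml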

Sunday, August 29, 2010

Development Environment

This is the set of tools available to the development team, all configured for authentication against the corporate LDAP:

  • Artifactory is the Maven repository mirror and corporate artifact repository;
  • Subversion, behind the Apache Web Server, serves as source code management;
  • Apache Continuum, behind the usual Apache Web Server, runs continuous integration builds and tests using the projects' Maven and Ant configurations;
  • Redmine revealed itself as the perfect solution for project issue and time tracking, with the addition of internal documentation;
  • source code analysis, test coverage and code quality in general are available through Sonar;
  • performance issues are found through the HypericHQ monitoring system, whose reports are available to the system administrators too;
  • Eclipse is the chosen IDE, supported by this minimal plugin set.

Monday, May 31, 2010

java.util.Calendar and the last (not exactly) day of the month


Consider the following code:

Calendar calendar = Calendar.getInstance(); // initialized to the current date, e.g. 31-May-2009
calendar.set(Calendar.YEAR, 2009);
calendar.set(Calendar.MONTH, Calendar.FEBRUARY);
// DAY_OF_MONTH still holds the value taken from the current date
calendar.set(Calendar.DAY_OF_MONTH, calendar.getActualMaximum(Calendar.DAY_OF_MONTH));
return calendar.getTime();

What's strange or wrong with this? The code looks correct at first glance, and the expected
result is 28 Feb 2009. Unfortunately it's not always so!

Suppose you run the above code on 31-May-2009 at 12:00 AM: the result will be 3 Mar 2009!

The reason lies in the lenient Calendar mechanism. This Calendar property is set to true by default, so no exception of any kind is thrown when the field combination is not a valid date: starting from 31-May-2009, setting the month to February leaves DAY_OF_MONTH at 31, and the lenient calendar silently interprets 31-Feb-2009 as 3-Mar-2009.

This is the javadoc about it:

When a Calendar is lenient, it accepts a wider range of field values than it produces. For example, a lenient GregorianCalendar interprets MONTH == JANUARY, DAY_OF_MONTH == 32 as February 1. A non-lenient GregorianCalendar throws an exception when given out-of-range field settings. When calendars recompute field values for return by get(), they normalize them. For example, a GregorianCalendar always produces DAY_OF_MONTH values between 1 and the length of the month.
This means that, in our case, if we really want to get the last day of the month, we have to
write this slightly different code:


Calendar calendar = Calendar.getInstance();
calendar.set(Calendar.YEAR, 2009);
calendar.set(Calendar.MONTH, Calendar.FEBRUARY);
calendar.set(Calendar.DAY_OF_MONTH, 1); // reset the day first, so the intermediate date is always valid
calendar.set(Calendar.DAY_OF_MONTH, calendar.getActualMaximum(Calendar.DAY_OF_MONTH));
return calendar.getTime(); // 28 Feb 2009, whatever the current date is

That said, sometimes it's not such a bad idea to set the lenient property to false and get an IllegalArgumentException: always better than an abnormal runtime behavior.
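
A minimal sketch of the non-lenient variant, in the same scenario as above:

Calendar calendar = Calendar.getInstance(); // e.g. on 31-May-2009
calendar.setLenient(false); // reject inconsistent field combinations
calendar.set(Calendar.YEAR, 2009);
calendar.set(Calendar.MONTH, Calendar.FEBRUARY);
// DAY_OF_MONTH is still 31, so the next line throws IllegalArgumentException
// instead of silently returning 3 Mar 2009
return calendar.getTime();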

Hope it can be helpful.

Tuesday, May 25, 2010

Five reasons to hate DTOs

I finally came to it: I hate Data Transfer Objects.


I can't explain why a lot of people still stick with this uncomfortable pattern, and why most of them continue to ignore the cons of this choice. Now I'll try to explain my opinion by listing the reasons why I think this should be classified as an anti-pattern.

  1. whenever you have to return or receive an object you must copy it, using almost double the heap memory (yeah, I know it's not really double, but it's something near it)
  2. if you have to return or receive a complex structure you have two choices: deep copy the structure or use multiple interactions; in both cases you are losing heap space and processing time
  3. almost any change to the business model interfaces will be reflected in the transfer objects AND in the code which maps the two (the latter does not apply if you use introspection, which is slower and less customizable), more than doubling the maintenance time in an error prone way
  4. programmers tend to blur the difference between model objects and DTOs, adding utility methods to the former and business logic to the latter
  5. if in certain situations you need additional info on the client side from a returned DTO you have two choices: embed a service call into the DTO (which hides the complexity but exposes you to a performance hit, as your users don't know they are starting another interaction) or call another service to obtain the additional info (which adds complexity to your service interface)
Now let me explain my way of replacing DTOs with interfaces:
  1. if you need to return or receive almost all the information stored in your business model object, just do it: return or receive the business model object itself
  2. whenever you would return a DTO, maybe to reduce the information provided by hiding some properties/methods, return an interface which your business model object implements (see the sketch below)
  3. whenever you would receive a DTO you have two choices: use a business model object ancestor or use a business model object component; the choice depends on whether your business model is built by composition or by inheritance
  4. when you need to return a complex structure, just initialize the structure before returning it (this is needed to avoid lazy initialization errors)
Four simple rules to avoid class duplication, memory duplication, and a lot of processing time spent copying data from one object to another.
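
To make rule 2 concrete, here is a minimal sketch (Customer, CustomerView and CustomerService are hypothetical names of mine):

// CustomerView.java - the restricted view returned instead of a DTO
public interface CustomerView {
    String getName();
    String getEmail();
}

// Customer.java - the business model object implements the view directly:
// no copy, no mapping code to maintain
public class Customer implements CustomerView {
    private String name;
    private String email;
    private String internalNotes; // never exposed through CustomerView

    public String getName() { return name; }
    public String getEmail() { return email; }
    // business logic stays here, invisible to CustomerView clients
}

// CustomerService.java - the service signature exposes only the view
public interface CustomerService {
    CustomerView findByEmail(String email);
}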

Do you see situations where this solution is not applicable? Let me know, I'm sure I can find a non-DTO-based solution.

Wednesday, April 14, 2010

Password meter

It's a good and recent practice to place a password strength meter on registration forms, something like the one depicted below.

[password strength meter screenshot]
Achieving such a result can be very easy: just adapt something freely available on the web, like the one provided here.
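
The logic behind these meters is usually a simple score computed from the password length and the variety of character classes; the snippet below is my own illustration of the idea, not the code linked above:

// naive strength score between 0 and 100, to be mapped onto the colored bar
public static int passwordScore(String password) {
    int score = Math.min(password.length() * 4, 40); // length contribution, capped
    if (password.matches(".*[a-z].*")) score += 15;        // lowercase letters
    if (password.matches(".*[A-Z].*")) score += 15;        // uppercase letters
    if (password.matches(".*[0-9].*")) score += 15;        // digits
    if (password.matches(".*[^a-zA-Z0-9].*")) score += 15; // symbols
    return Math.min(score, 100);
}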

Friday, February 12, 2010

Test Environment: OpenSSO + JBoss + WSO2 ESB + Liferay

Today I'm trying to set up a test environment for an architecture we thought could solve some project problems.

The architecture is the following:
  • JBoss 4.2 or 5.1 (the choice is delayed)
  • OpenSSO 8
  • WSO2 ESB
  • Liferay 5.2
As a first step I installed OpenSSO 8 Enterprise (the Express version did not work for me in any environment I tried: Tomcat 5.5, Tomcat 6, JBoss 4.2, JBoss 5.1, Glassfish 2.1) on top of Tomcat 6.
The installation is easy: just deploy the opensso.war inside the tomcat/webapps folder, give Tomcat one gigabyte of memory (add JAVA_OPTS=-Xmx1024m in catalina.sh) and a fully qualified domain name to the host running Tomcat (sso.smartlab.net alias for 127.0.0.1 in /etc/hosts). On first access to the http://sso.smartlab.net/opensso URL (it's very important you use the fully qualified domain name on your first access, as it's used for configuration) I simply ran the Default Configuration (suggested for test environments only), which requires just two passwords: the amAdmin credentials will be used to access the administration console, while the amAgent credentials will be used by the policy agents to authenticate against the server.
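
To recap, the two preliminary tweaks on my machine were:

# tomcat/bin/catalina.sh (near the top)
JAVA_OPTS=-Xmx1024m

# /etc/hosts
127.0.0.1   localhost sso.smartlab.net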

Afterwards I installed the OpenSSO policy agent on top of JBoss 4.2. First of all you need to create the J2EE policy agent profile in OpenSSO: access the OpenSSO administration console (username amAdmin, password the one you specified during initial configuration) and follow the official instructions, replacing the information provided there with your test environment details; mine were:
  • Name : JBoss
  • Server URL : http://sso.smartlab.net:8080/opensso
  • Agent URL : http://test.smartlab.net:8180/opensso-agent
Please note the Agent URL has a different name and port: the name is resolved through /etc/hosts to 127.0.0.1 (but MUST share the same domain with the Server URL, or additional configuration is needed), while the port MUST be different because I'm running both servers on the same machine.
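
That is, one more line in my /etc/hosts:

127.0.0.1   test.smartlab.net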

I'm just performing an initial test of the architecture so I'm cloning the server/default folder of my JBoss 4.2 installation to server/sso, cleaning it up from previous work and editing the deploy/jboss-web.deployer/server.xml to switch the connector ports to 8180 (HTTP) and 8109 (AJP).
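
The connector change amounts to something like this (attributes omitted; the stock file ships with ports 8080 and 8009):

<!-- server/sso/deploy/jboss-web.deployer/server.xml -->
<Connector port="8180" ... />                    <!-- HTTP, was 8080 -->
<Connector port="8109" protocol="AJP/1.3" ... /> <!-- AJP, was 8009 -->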

I unzipped the JBoss Policy Agent 3.0 package (unpacked in /opt/jboss/opensso, removing the intermediate j2ee_agents/jboss_v42_agent directory structure), then I created a file containing the agent password:

$> echo "agent password" > /opt/jboss/opensso/agent.pwd

then I ran the bin/agentadmin script with this information:
  • JBoss Server Config Directory : /opt/jboss/server/sso/conf
  • JBoss Server Home Directory : /opt/jboss
  • OpenSSO server URL : http://sso.smartlab.net:8080/opensso
  • Agent URL : http://test.smartlab.net:8180/opensso-agent
  • Agent Profile name : JBoss
  • Agent Profile Password file name : /opt/jboss/opensso/agent.pwd
  • Agent permissions gets added to java permissions policy file : false
Upon procedure completion some files were added to my server/sso JBoss instance, but I had to:
  • rename the deploy/agentapp.war file to deploy/opensso-agent.war, because I used a non-standard name;
  • change the jboss/bin/run.sh script, because the suggested procedure to add the agent classpath wasn't good for my environment; I used this script excerpt in place of the suggested one (please note that this excerpt needs you to change the first line of run.sh from #!/bin/sh to #!/bin/bash).

I then ran my JBoss test instance with /opt/jboss/bin/run.sh -c sso and everything seemed to work: I tested it by accessing the http://test.smartlab.net:8180/opensso-agent application and being redirected to the OpenSSO login page on http://sso.smartlab.net/opensso.

The last test was about securing the JBoss JMX Console through OpenSSO. The activity required me to:
  • add this snippet to the deploy/jmx-console.war/WEB-INF/web.xml file
  • add this snippet to the deploy/jmx-console.war/WEB-INF/jboss-web.xml file (both sketched below)
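Roughly, the web.xml snippet routes every request through the agent filter, while the jboss-web.xml one points the application at the JAAS realm installed by agentadmin. From memory they look like the excerpts below, so double check them against the agent documentation:

<!-- deploy/jmx-console.war/WEB-INF/web.xml -->
<filter>
  <filter-name>Agent</filter-name>
  <filter-class>com.sun.identity.agents.filter.AmAgentFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>Agent</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

<!-- deploy/jmx-console.war/WEB-INF/jboss-web.xml -->
<security-domain>java:/jaas/AMRealm</security-domain>
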
Upon completion, the same OpenSSO login page should be displayed when trying to access the http://test.smartlab.net:8180/jmx-console URL.

Ok, then let's try to log into the JBoss JMX Console, but with which credentials?!? On my first try I used the OpenSSO Administration Console superuser credentials (amAdmin/adminadmin), but I encountered a redirection loop failure, thus discovering my setup wasn't ready yet. Googling a little bit I discovered this error can be solved simply by adding an additional JVM parameter to the Tomcat configuration: JAVA_OPTS="$JAVA_OPTS -Dcom.iplanet.am.cookie.c66Encode=true".

With that problem solved, going back to the JMX Console I got a 403 (resource forbidden) error; after some investigation I discovered the easiest solution was to tell OpenSSO to simply apply a limited policy of type SSO_ONLY (Access Control > Top Level Realm > Agents > J2EE > JBoss > General, add a jmx-console=SSO_ONLY map entry).
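
If you manage the agent configuration as a properties file instead of through the console, that map entry should correspond to a property like the following (property name quoted from memory, so verify it against the J2EE agent documentation):

com.sun.identity.agents.config.filter.mode[jmx-console]=SSO_ONLY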

In the near future I want to try the OpenID 2 standard on OpenSSO; I've found some instructions on another blog but I haven't had the time to investigate yet.

Friday, February 5, 2010

Redmine Installation on Ubuntu 9.04

I decided to give Redmine 0.9.1 a try and to test it on my own notebook running Ubuntu 9.04 (64 bit). As a Ruby newbie I had a few issues, which is why I'm posting my experience here. By the way: Redmine is now running fine on my notebook, authenticating users against my corporate OpenLDAP!

First of all I installed the gem and ruby packages from the Ubuntu repos:

sudo apt-get install rubygems ruby

I decided to perform the remaining installation steps through gem (which, by the way, is a good tool to install Ruby packages, something like apt):

sudo gem install rails
sudo gem install rake
sudo gem install rack -v=1.0.1

By default Redmine runs on top of MySQL, but I prefer PostgreSQL as RDBMS, so I followed the Redmine wiki instructions to configure PostgreSQL as the backend.

sudo gem install pg

Here I hit the first problem, as a native library I hadn't installed on my PC was required, but the output message was unclear: something about a missing pg_config parameter or command.
After some searching I discovered pg_config is a command line utility available through the Ubuntu repositories, so the problem is easily solved by running:

sudo apt-get install libpq-dev

Now the previous installation command should finish properly and you can continue with the instructions available on the Redmine wiki.
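
For reference, on my installation those remaining steps boiled down to something like the following (recalled from the wiki, so double check them for your Redmine version):

$> rake generate_session_store
$> rake db:migrate RAILS_ENV=production
$> rake redmine:load_default_data RAILS_ENV=production
$> ruby script/server webrick -e production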

Once the WEBrick server was started I began playing with the web application, but I encountered another problem: the OpenLDAP integration. I entered all the parameters in the fields and got a successful connection test, but I was unable to log into the system with OpenLDAP accounts: I discovered the problem was that I had entered too much information in the LDAP Authentication definition!

Strange but true: in the Redmine LDAP Authentication definition page you MUST NOT insert any credentials (I was erroneously populating those fields with the LDAP administrator credentials); leave those fields blank and voilà, LDAP integration works!