Monday, November 16, 2009

JBoss 4 on CentOS 5

Here are the step-by-step instructions to have JBoss 4 start on boot on a CentOS 5 server.

1. create a jboss user with the command
useradd --system -d /your/jboss/root/dir jboss

2. copy the init script already available in the jboss distribution into the /etc/init.d folder with the command
cp /your/jboss/root/dir/bin/jboss_init_redhat.sh /etc/init.d/jboss

3. alter the /etc/init.d/jboss file to account for SELinux on CentOS 5, changing the line
  SUBIT="su - $JBOSS_USER -c "
to use runuser, the SELinux-aware equivalent of su
  SUBIT="runuser - $JBOSS_USER -c "

4. ensure the jboss user is able to read and write all the files in its home folder by executing the commands
chown jboss.jboss /your/jboss/root/dir -Rf
chmod u+rw /your/jboss/root/dir -Rf

5. (optional) ensure the deployers are able to read and write all the files in the jboss server dirs
chown jboss.devel /your/jboss/root/dir/server -Rf
chmod g+rws /your/jboss/root/dir/server -Rf

6. (optional) ensure the jboss server listens on the correct address by specifying the -b option on startup: edit the /etc/init.d/jboss script and add the second line below (the first line is already present and is shown only as a positional reference):
JBOSS_HOME=${JBOSS_HOME:-"/usr/local/jboss"}
JBOSS_HOST=0.0.0.0
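
Finally, register the script with the standard CentOS service tools so JBoss really starts at boot; something like the following should work, assuming the init script carries the usual chkconfig header comments (add them if chkconfig complains):
chkconfig --add jboss
chkconfig --level 345 jboss on
service jboss start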



Thursday, October 15, 2009

EJB 2.x maximum performance and flexibility: abstracting from Remote vs Local

I think the following should be a best practice whenever you implement the EJB 2.x specification, as it allows you to transparently switch between local and remote interfaces without any code change.

In EJB 2.x you need to write the following classes/interfaces to support both remote and local deployment:
  • public class MyComponentBean implements javax.ejb.SessionBean
  • public interface MyComponentRemoteHome extends javax.ejb.EJBHome
  • public interface MyComponentRemote extends javax.ejb.EJBObject
  • public interface MyComponentLocalHome extends javax.ejb.EJBLocalHome
  • public interface MyComponentLocal extends javax.ejb.EJBLocalObject
If you want to be able to switch between local and remote deployment you need to change every place where you retrieve the EJB; usually this retrieval is concentrated in a ServiceLocator implementation like:

import java.util.Properties;
import javax.naming.InitialContext;

public class MyComponentServiceLocator {
    public final static String MY_COMPONENT_LOCATION = "ejb/myComponent";

    public static MyComponentLocal getLocal(Properties properties) throws Exception {
        InitialContext context = new InitialContext(properties);
        MyComponentLocalHome home = (MyComponentLocalHome) context.lookup(MY_COMPONENT_LOCATION + "/local");
        return home.create();
    }

    public static MyComponentRemote getRemote(Properties properties) throws Exception {
        InitialContext context = new InitialContext(properties);
        MyComponentRemoteHome home = (MyComponentRemoteHome) context.lookup(MY_COMPONENT_LOCATION + "/remote");
        return home.create();
    }
}


With this approach you can switch from local to remote by replacing MyComponentServiceLocator.getLocal(...) with MyComponentServiceLocator.getRemote(...) in every place that needs to switch; in addition you must change the declared type of the variable receiving the MyComponentServiceLocator result, from MyComponentLocal to MyComponentRemote.

In addition you need to manually ensure that all the interfaces expose the same methods.
Wouldn't it be easier if we had some sort of automatic check and avoided the need to touch the code? Couldn't we switch between local and remote at deployment time without any change at compile time?

Well, the answer is in the following structure:
  • public interface MyComponent
    declares all shared functional methods; each method declares java.rmi.RemoteException in addition to any exception it would normally throw
  • public interface MyComponentHome
    declares all shared creation methods throwing both java.rmi.RemoteException and javax.ejb.CreateException

  • public class MyComponentBean implements javax.ejb.SessionBean, MyComponentRemote, MyComponentLocal
    the MyComponentRemote and MyComponentLocal interfaces have been added here to ensure the implementation class provides code for all the declared methods: be careful, the methods must not throw java.rmi.RemoteException
  • public interface MyComponentRemoteHome extends javax.ejb.EJBHome, MyComponentHome
    the MyComponentHome interface has been added here to ensure all shared creation methods are supported by the remote home: if all the remote creation methods are shared then this interface will be completely empty!
  • public interface MyComponentRemote extends javax.ejb.EJBObject, MyComponent
    the MyComponent interface has been added here to ensure all the shared methods are supported through the remote interface: if all the remote methods are shared then this interface will be completely empty!
  • public interface MyComponentLocalHome extends javax.ejb.EJBLocalHome, MyComponentHome
    the MyComponentHome interface has been added here to ensure all shared creation methods are supported by the local home: all methods defined in the MyComponentHome interface must be overridden here to remove the java.rmi.RemoteException declaration; if you forget this step your application server should warn you when you deploy this EJB.
  • public interface MyComponentLocal extends javax.ejb.EJBLocalObject, MyComponent
    the MyComponent interface has been added here to ensure all shared methods are supported through the local interface: all methods defined in the MyComponent interface must be overridden here to remove the java.rmi.RemoteException declaration; if you forget this step your application server should warn you when you deploy this EJB.
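Put into code, the skeleton described above looks something like the following sketch (doSomething is just a placeholder business method; packages and the bean class body are omitted):

public interface MyComponent {
    // shared functional methods: each one declares java.rmi.RemoteException
    String doSomething(String input) throws java.rmi.RemoteException;
}

public interface MyComponentHome {
    // shared creation methods: they declare both CreateException and RemoteException
    MyComponent create() throws javax.ejb.CreateException, java.rmi.RemoteException;
}

// remote views: completely empty when every method is shared
public interface MyComponentRemote extends javax.ejb.EJBObject, MyComponent {
}

public interface MyComponentRemoteHome extends javax.ejb.EJBHome, MyComponentHome {
}

// local views: the shared methods are overridden to drop java.rmi.RemoteException
public interface MyComponentLocal extends javax.ejb.EJBLocalObject, MyComponent {
    String doSomething(String input);
}

public interface MyComponentLocalHome extends javax.ejb.EJBLocalHome, MyComponentHome {
    MyComponentLocal create() throws javax.ejb.CreateException;
}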
This is harder to describe than to put into practice, and the advantages are:
  • one place for shared creation methods: if you add a method to MyComponentHome interface you automatically get it on the remote home and if you forget to override it in the local home (to remove the java.rmi.RemoteException) your application server will warn you on your first deployment;
  • one place for shared functional methods: if you add a method to MyComponent interface you automatically get it on the remote interface and if you forget to override it in the local interface (to remove the java.rmi.RemoteException) your application server will warn you on your first deployment;
  • your clients will no longer have to deal with remote vs local differences as they will use the MyComponent interface (unless they need some methods not available on both interfaces);
  • you can still produce different interfaces for local and remote deployments;
  • your implementation will always implement the required methods;
  • you can switch between local and remote deployment using the ejb-ref directive (in your web.xml or in your ejb-jar.xml);
  • you can have a ServiceLocator like the following one, which completely masks the remote vs local difference:

public class MyComponentServiceLocator {
   public final static String MY_COMPONENT_LOCATION = "ejb/myComponent";
   public static MyComponent get(Properties properties) throws NamingException, CreateException, RemoteException {
        InitialContext context = new InitialContext(properties);
       MyComponentHome home = (MyComponentHome)context.lookup("java:comp/env/" + MY_COMPONENT_LOCATION);
       return home.create();
   }
}
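
The actual binding is then decided at deployment time. As a sketch (simple class names are used in place of fully qualified ones, and MyComponent stands for the hypothetical ejb-name declared in ejb-jar.xml), an ejb-local-ref entry in web.xml binds java:comp/env/ejb/myComponent to the local view:

<ejb-local-ref>
    <ejb-ref-name>ejb/myComponent</ejb-ref-name>
    <ejb-ref-type>Session</ejb-ref-type>
    <local-home>MyComponentLocalHome</local-home>
    <local>MyComponentLocal</local>
    <ejb-link>MyComponent</ejb-link>
</ejb-local-ref>

Switching to the remote view only means replacing it with the equivalent ejb-ref entry (home and remote elements instead of local-home and local): the ServiceLocator above stays untouched.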


If you don't want to deal with the ejb-ref at all, you can consider the following ServiceLocator implementation, which allows any deployment combination and automatically uses the local interface when available (with slightly lower performance, as two JNDI lookups are performed in the worst case):

public class MyComponentServiceLocator {
   public final static String MY_COMPONENT_LOCATION = "ejb/myComponent";
   public static MyComponent get(Properties properties) throws NamingException, CreateException, RemoteException {
       try {
            return MyComponentServiceLocator.getLocal(properties);
       } catch (Exception e) {
            return MyComponentServiceLocator.getRemote(properties);
       }
   }
   public static MyComponentLocal getLocal(Properties properties) throws NamingException, CreateException {
       InitialContext context = new InitialContext(properties);
       MyComponentLocalHome home = (MyComponentRemoteHome)context.lookup(MY_COMPONENT_LOCATION + "/local");
       return home.create();
   }
   public static MyComponentRemote getRemote(Properties properties) throws NamingException, CreateException, RemoteException {
       InitialContext context = new InitialContext(properties);
       MyComponentRemoteHome home = (MyComponentRemoteHome)context.lookup(MY_COMPONENT_LOCATION + "/remote");
       return home.create();
   }
}

Be careful: this last solution can produce unwanted exception traces in your application server whenever the local lookup fails. Those exceptions are harmless unless they come from the remote lookup, but they can drive you mad when you are trying to understand why your EJB is not working.

Tuesday, October 6, 2009

JBoss Production Environment

In my humble opinion a JBoss production environment should be something like the one depicted in the following diagram



I recommend using the mod_proxy, mod_proxy_balancer and mod_proxy_ajp Apache modules for both load balancing and request forwarding, with directives like:

ProxyRequests Off
<Proxy balancer://webapp-cluster>
Order deny,allow
Allow from all
BalancerMember ajp://instance1:8009/webapp-name loadfactor=1
BalancerMember ajp://instance2:8009/webapp-name loadfactor=1
ProxySet lbmethod=bytraffic
</Proxy>
ProxyPass /webapp-name balancer://webapp-cluster
ProxyPassReverse /webapp-name balancer://webapp-cluster
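If those modules are not already enabled in your Apache configuration, the corresponding LoadModule directives are needed as well (module paths may differ between distributions):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so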
System size and load can change the numbers, but the architecture should be sufficient and scalable enough for many situations.

Monday, October 5, 2009

Java Serialization & final class attributes

Today I had to face a problem with Java Serialization and here is the report of what I've achieved.
The SmartWeb BusinessObject class defines a protected attribute named logger carrying the logger for subclasses. The BusinessObject class implements Serializable, thus it needs to declare the logger attribute as transient because Commons Logging loggers are not serializable.

The problem arises whenever you deserialize a BusinessObject subclass, because the logger attribute will not be deserialized (it was never serialized at all!) and this makes all your logging statements produce NullPointerExceptions! BTW, those errors are very difficult to understand for two reasons:
  1. you always consider that attribute valid and will hardly suspect the logger attribute to be null
  2. every logging statement you add to your code to understand what's going wrong will fail on its own
Well, the solution to the problem is to re-initialize the logger attribute upon object deserialization by implementing a custom readObject method, as stated in the Serializable interface documentation:
private void readObject(java.io.ObjectInputStream in)
throws IOException, ClassNotFoundException;
The preceding approach alone is not going to work in my specific case because the logger attribute has been declared as final to avoid unwanted replacements and potential errors. The first option I considered was "ok, no way out, let's make that attribute non-final", but that idea was quickly replaced by "standard Java Serialization is normally able to deserialize final fields... how?", so I googled and dug a little into the problem, ending up with the following solution:
	/**
	 * Custom deserialization. We need to re-initialize the logger instance as loggers
	 * can't be serialized.
	 */
	private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
		try {
			Class type = BusinessObject.class;
			// use getDeclaredField as the field is non public
			Field logger = type.getDeclaredField("logger");
			// suppress access checks so the final field can be written
			logger.setAccessible(true);
			logger.set(this, LogFactory.getLog(type));
			// restore the default access checks
			logger.setAccessible(false);
		} catch (Exception e) {
			LogFactory.getLog(this.getClass())
				.warn("unable to recover the logger after deserialization: logging statements will cause null pointer exceptions", e);
		}
		in.defaultReadObject();
	}
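
A quick way to verify the fix is a serialization round trip like the sketch below (java.io imports are assumed and doSomethingThatLogs stands for any subclass method that uses the logger):

	static BusinessObject roundTrip(BusinessObject original) throws IOException, ClassNotFoundException {
		ByteArrayOutputStream buffer = new ByteArrayOutputStream();
		new ObjectOutputStream(buffer).writeObject(original);
		ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray()));
		// triggers the custom readObject above, which restores the transient logger
		return (BusinessObject) in.readObject();
	}

Calling doSomethingThatLogs() on the returned copy now logs normally instead of throwing a NullPointerException.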


Wednesday, September 30, 2009

Subversion + LDAP read only and read write


Here follows the Apache configuration I found working to set read-only and read-write permissions on a repository. Note that read-write permission is granted by adding a user to both the read-allowed and write-allowed LDAP groups, while read-only permission is granted through the read-allowed LDAP group alone.


  AuthType Basic
  AuthBasicProvider ldap
  AuthBasicAuthoritative On
  AuthName "SmartLab Directory Server"
  AuthLDAPURL ldap://ldap/ou=people,dc=smartlab,dc=net
  AuthLDAPGroupAttributeIsDN off
  AuthLDAPGroupAttribute memberUid
  <Limit GET PROPFIND OPTIONS REPORT>
    Require ldap-group cn=read-allowed,ou=groups,dc=smartlab,dc=net
  </Limit>
  <LimitExcept GET PROPFIND OPTIONS REPORT>
    Require ldap-group cn=write-allowed,ou=groups,dc=smartlab,dc=net
  </LimitExcept>





Thursday, September 24, 2009

Subversion merge after refactor

I recently discovered that the popular Subversion VCS has a problem applying differences to moved files. Unfortunately I discovered this after a big and time-consuming refactoring, so I had to find an "easy-to-apply" solution to the problem.




My repository layout is something like:


project
+-- trunk
+-- branches
+-- refactoring


The target operation is to merge the refactoring branch onto the trunk, but to avoid blocking everybody else I decided to perform the inverse: apply all the updates made on the trunk to the refactoring branch. On merge completion I'm going to swap the two.

First of all I started merging the trunk onto the branch, ignoring all the "Skipped missing target" messages which occur for each file you moved or renamed: in my case this happened for 99% of the files, as it was a massive refactoring, so the merge only brought in added files (which you may still need to review).

svn merge -r 3855:HEAD http://svn.smartlab.net/project/trunk


Then I moved to the trunk working copy and performed a huge diff, starting from the revision at which I created the branch and stopping at the HEAD revision:


svn diff -r 3855:HEAD . > rev.3855-HEAD.diff

The next step was to open the diff file both in Eclipse (right click on the checkout folder, Team/Apply Patch...) and in a text editor (I used gedit, but vi or notepad++ will get you the same results): I used Eclipse to easily spot unmatched entries (shown with a red cross) and applied a textual search & replace on the diff file in the text editor, as sketched below.
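
For example, assuming the refactoring moved sources from src/old/module to src/new/module (purely illustrative paths), the equivalent textual replacement can be applied in one go:

sed -i 's|src/old/module|src/new/module|g' rev.3855-HEAD.diff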

At the end of the process I had a diff file I could use to patch my branch and bring it up to date with the trunk, with a few missing/inapplicable hunks against the files I had definitively removed from my branch.

The process was easy but time consuming, and the result depends on how accurately you perform it. But at least all my work was not wasted!

BTW: if you are not mass refactoring you can use the Eclipse support to select an unmatched diff entry and retarget it to another file; this can be really useful if you have moved only a few files.


Thursday, September 3, 2009

RTMPT on Tomcat + Red5 as a war

I'm currently developing a Flex application and, as an Open Source addict, I chose Red5 as the streaming server.

The current architecture is a Tomcat 6 servlet container with some wars deployed in it, one of which is the Red5 streaming server.

To ensure my application can be used over the Internet, the default RTMP protocol is not the best choice as some firewalls block its port, so I opted for RTMP over HTTP, also known as RTMPT (notice the additional final T), which tunnels RTMP inside HTTP.

I found a bit of confusion when I googled how to configure my Red5 war, so I'm reporting here the four simple steps I took to make my configuration work.


  1. Open your WEB-INF/web.xml file and add the RTMPT servlet definition and mappings (if RTMP is going to be tunneled inside HTTP we need an HTTP endpoint able to forward packets)

    <servlet>
    <servlet-name>rtmpt</servlet-name>
    <servlet-class>org.red5.server.net.rtmpt.RTMPTServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
    <servlet-name>rtmpt</servlet-name>
    <url-pattern>/fcs/*</url-pattern>
    </servlet-mapping>

    <servlet-mapping>
    <servlet-name>rtmpt</servlet-name>
    <url-pattern>/open/*</url-pattern>
    </servlet-mapping>

    <servlet-mapping>
    <servlet-name>rtmpt</servlet-name>
    <url-pattern>/close/*</url-pattern>
    </servlet-mapping>

    <servlet-mapping>
    <servlet-name>rtmpt</servlet-name>
    <url-pattern>/send/*</url-pattern>
    </servlet-mapping>

    <servlet-mapping>
    <servlet-name>rtmpt</servlet-name>
    <url-pattern>/idle/*</url-pattern>
    </servlet-mapping>


  2. Inside your WEB-INF/lib you should have a Red5 jar (well, this is not my case as I use Maven for the build, but Maven users will understand what I mean, right?) and you need to open it up and edit the red5.properties file it contains:

    http.port = 8080


  3. Open your Tomcat 6 folder and edit the conf/server.xml file adding, if needed, an HTTP/1.1 connector for the port you want to use for RTMPT (the default port is 80, but you can set it according to your needs):

    <!-- RTMPT connector redirecting to your HTTP port -->
    <Connector port="8088" protocol="HTTP/1.1"
    maxThreads="150" connectionTimeout="20000"
    redirectPort="8080" />



  4. The last and rather annoying part is that your streaming server war needs to be bound to the root context, so you can simply rename it to ROOT.war

Friday, July 24, 2009

OpenSSL CA-Infrastructure

Generate your own private key and make sure no one will ever get access to it:
openssl genrsa -des3 -out private.key 2048

If you need your public key outside of a certificate, issue this command:
openssl rsa -in private.key -pubout -out public.key

To generate a certificate request for your key:
openssl req -new -key private.key -out certificate.csr

Now you should send ONLY your certificate request (never your private key) to the certification authority; someone on the other side will view your request:
openssl req -text -noout -in certificate.csr

and then will decide whether to sign your request, sending back a valid certificate:
openssl x509 -days 365 -in certificate.csr -out certificate.crt -sha1 -CA ca.crt -CAkey ca.key -req -extfile user.ext
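
The ca.key and ca.crt used above are the certification authority's own private key and self-signed certificate; if you also have to play the CA role yourself, something like the following should create them (key size and validity are only examples):
openssl genrsa -des3 -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt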

Friday, March 20, 2009

HSQLDB No such table Exception

I've encountered a strange problem using HSQLDB which became totally weird when using that database in conjunction with Hibernate formulas. Here is the problem and the specific issue.

I have a table named group (lower case) and a table named property (lower case) in a schema named auth (lower case too, as a naming convention) and I want to create them both on HSQLDB. I know group is a reserved word in SQL so I've created my DDL statements accordingly:
    create schema auth authorization DBA;

    create table auth."group" (
        "id" bigint generated by default as identity (start with 1),
        "description" longvarchar,
        primary key ("id")
    );

    create table auth."property" (
        id bigint generated by default as identity (start with 1),
        handler varchar(255),
        primary key (id)
    );
As you can see I've double quoted the structure element names in table group to avoid the reserved word problem (I could limit myself to the table name, but this doesn't make any difference) and I've used the same notation for the table property name too (not needed but this clarifies my example). Now I wish to query that database with a query like
select * from auth."group"
which correctly executes and returns the results, but a query like
select * from auth.property
fails with a No such table exception !?!

Well, the problem is that HSQLDB converts all identifiers to upper case unless you use the double quote notation! The query should therefore be issued as
select * from auth."property"
If you query the database meta data you can see the problem in the auth schema name: its real name is AUTH, all uppercase letters!

The problem here is that HSQLDB is case sensitive but implicitly converts all your table names and column names to upper case! Yes, the problem occurs on column names too; in fact the following query fails with a No such column exception:
select "id" from auth."property"
That's because the id column was implicitly renamed to ID... sigh!

OK, this is a problem, but still not a big one: you just use double quotes consistently throughout your project (I had no choice but to use double quotes everywhere) and you can forget about it, simply treating HSQLDB as a case sensitive database.

If you wish to use Hibernate to query such a database you have to use the backtick character ` instead of double quotes inside your HBMs, letting Hibernate substitute the ` char with the " char (to avoid XML issues).
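
For example, a hypothetical mapping for the auth."group" table created above might quote its identifiers like this (the class name is made up and unrelated attributes are omitted):

    <class name="Group" table="`group`" schema="auth">
        <id name="id" column="`id`">
            <generator class="identity"/>
        </id>
        <property name="description" column="`description`"/>
    </class>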

Well, still no unsolvable problem so far, but if you want to write a Hibernate formula property... BANG! With a Hibernate formula property you can in fact write your own SQL statement which will be executed to populate that property, but there you can use neither double quotes nor the ` char to escape a column name! Well, the last statement is not completely true, as you can use the ` char instead of double quotes, but in that case you can only reference fields of the table your class is mapped onto... which makes formulas quite useless.

I'm currently trying to help the Hibernate developers solve the problem... I'll update this post if I find a solution, either as a Hibernate user or as a developer.

Wednesday, March 4, 2009

Reduce a JNLP application size

I've found (with suggestions from my friend Matteo Croce) a simple and easy way to drastically reduce the size of a Java client application:

  • pack all your classes and their dependencies into a single jar using the maven-shade-plugin (http://maven.apache.org/plugins/maven-shade-plugin/) or other similar tools
  • use Proguard (http://proguard.sourceforge.net/) to include in your uber-jar only those classes really used by your application (OPTIONAL)
  • use the standard Java 5 additional compression method Pack200 (http://java.sun.com/j2se/1.5.0/docs/guide/deployment/deployment-guide/pack200.html)
You can reduce a 10 MB uber-jar to 0.5 MB without many problems!
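
As a sketch, the Pack200 compression itself is a single command (application.jar is just a placeholder name):

pack200 --gzip application.jar.pack.gz application.jar

The packed file then has to be served to Java Web Start clients, for example through Sun's JnlpDownloadServlet or, on more recent JREs, by enabling the jnlp.packEnabled property in the JNLP descriptor.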

Wednesday, February 11, 2009

Tuning PostgreSQL on Linux

I've found an interesting documentation page which applies to our storage production environment: it's about kernel resources.

In brief, on a Linux box we can face three problems:
  • System V IPC Parameters

    The default maximum segment size is 32 MB, which is only adequate for small PostgreSQL installations. However, the remaining defaults are quite generously sized, and usually do not require changes. The maximum shared memory segment size can be changed via the sysctl interface. For example, to allow 128 MB, and explicitly set the maximum total shared memory size to 2097152 pages (the default):

    $ sysctl -w kernel.shmmax=134217728
    $ sysctl -w kernel.shmall=2097152

    In addition these settings can be saved between reboots in /etc/sysctl.conf (see the example after this list).

    Older distributions might not have the sysctl program, but equivalent changes can be made by manipulating the /proc file system:

    $ echo 134217728 >/proc/sys/kernel/shmmax
    $ echo 2097152 >/proc/sys/kernel/shmall
  • Memory Overcommit

    In Linux 2.4 and later, the default virtual memory behavior is not optimal for PostgreSQL. Because of the way that the kernel implements memory overcommit, the kernel might terminate the PostgreSQL server (the master server process) if the memory demands of another process cause the system to run out of virtual memory.

    If this happens, you will see a kernel message that looks like this (consult your system documentation and configuration on where to look for such a message):

    Out of Memory: Killed process 12345 (postgres). 

    This indicates that the postgres process has been terminated due to memory pressure. Although existing database connections will continue to function normally, no new connections will be accepted. To recover, PostgreSQL will need to be restarted.

    One way to avoid this problem is to run PostgreSQL on a machine where you can be sure that other processes will not run the machine out of memory. If memory is tight, increasing the swap space of the operating system can help avoiding the problem, because the out-of-memory (OOM) killer is invoked whenever physical memory and swap space are exhausted.

    On Linux 2.6 and later, an additional measure is to modify the kernel's behavior so that it will not "overcommit" memory. Although this setting will not prevent the OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl:

    sysctl -w vm.overcommit_memory=2

    or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting vm.overcommit_ratio. For details see the kernel documentation file Documentation/vm/overcommit-accounting.

    Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting vm.overcommit_memory to 2 on a kernel that does not have the relevant code will make things worse not better. It is recommended that you inspect the actual kernel source code (see the function vm_enough_memory in the file mm/mmap.c) to verify what is supported in your copy before you try this in a 2.4 installation. The presence of the overcommit-accounting documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor.

  • Resource Limits

    Unix-like operating systems enforce various kinds of resource limits that might interfere with the operation of your PostgreSQL server. Of particular importance are limits on the number of processes per user, the number of open files per process, and the amount of memory available to each process. Each of these have a "hard" and a "soft" limit. The soft limit is what actually counts but it can be changed by the user up to the hard limit. The hard limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit (Bourne shells) or limit (csh) is used to control the resource limits from the command line. On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the operating system documentation for details. The relevant parameters are maxproc, openfiles, and datasize. For example:

    default:\
    ...
    :datasize-cur=256M:\
    :maxproc-cur=256:\
    :openfiles-cur=256:\
    ...

    (-cur is the soft limit. Append -max to set the hard limit.)

    Kernels can also have system-wide limits on some resources.

    On Linux /proc/sys/fs/file-max determines the maximum number of open files that the kernel will support. It can be changed by writing a different number into the file or by adding an assignment in /etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled; see /usr/src/linux/Documentation/proc.txt for more information.
    The PostgreSQL server uses one process per connection so you should provide for at least as many processes as allowed connections, in addition to what you need for the rest of your system. This is usually not a problem but if you run several servers on one machine things might get tight.

    The factory default limit on open files is often set to "socially friendly" values that allow many users to coexist on a machine without using an inappropriate fraction of the system resources. If you run many servers on a machine this is perhaps what you want, but on dedicated servers you might want to raise this limit.
    On the other side of the coin, some systems allow individual processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set PostgreSQL's max_files_per_process configuration parameter to limit the consumption of open files.
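
As mentioned above, these settings can be persisted across reboots in /etc/sysctl.conf; for the values used in this post that would look like:

    kernel.shmmax = 134217728
    kernel.shmall = 2097152
    vm.overcommit_memory = 2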

Tuesday, February 3, 2009

JBoss and PermGen OutOfMemoryError

The "PermGen" error happens, when the Java virtual machine runs out of memory in the permanent generation. Recall that Java has a generational garbage collector, with four generations: eden, young, old and permanent.

In the eden generation, objects are very short lived and garbage collection is swift and often.

The young generation consists of objects that survived the eden generation (or were pushed down to young because the eden generation was full at the time of allocation); garbage collection in the young generation is less frequent but still happens at quite regular intervals (provided that your application actually does something and allocates objects every now and then).

The old generation, well, you figured it. It contains objects that survived the young generation, or have been pushed down, and garbage collection is even less frequent but can still happen.

And finally, the permanent generation. This is for objects that the virtual machine has decided to endorse with eternal life - which is precisely the core of the problem. Objects in the permanent generation are never garbage collected; that is, under normal circumstances when the JVM is started with normal command line parameters.

So what happens when you redeploy your web application is that your WAR file is unpacked and its class files are loaded into the JVM. And here's the thing: class data almost always ends up in the permanent generation... because, seriously, who wants to garbage collect their classes?!? Well, apparently application servers do, and here's how we make that happen for JBoss (the same configuration is applicable to other application servers): add the following parameters to the JAVA_OPTS line in the bin/run.conf file:

-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=128m
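
In run.conf this boils down to a line like the following (a sketch: append the options to whatever JAVA_OPTS your installation already defines):

JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=128m"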


You may want to tune the MaxPermSize=128m part to fit your needs...

JBoss and multiple environments

Here at SmartLab we use four environments during the software life-cycle, each with its own characteristics:
  • the development environment is the one running on each development computer and allows each developer to write and test his own code in a non-shared environment without worrying about concurrency or conflicting changes;
  • the integration-test environment, also known as test, is the first opportunity for multiple developers and development teams to integrate the different parts into a single solution; this environment usually respects the architectural principles of the project but it may be scaled down in any respect;
  • the demo environment is the last developer-accessible environment and it fully respects all the architectural choices made for the system; in addition this environment should provide some sort of access from the outside world to allow pre-release reviews;
  • the production environment is where the system is deployed for public access.
The preceding environments are listed in ascending order of importance, security needs and computational power; each one runs an application server which needs to be configured in a proper way to fit the environment specific needs.

File logging is configured:
  • development - at a trace level and without rotation or append
  • test - at a debug level without rotation but with append
  • demo - at an info level with rotation and append
  • production - at an info level with rotation, append and backup
Console logging is configured:
  • development - at a debug level
  • test - at a warn level
  • demo - at a warn level
  • production - at an error level (used only to ensure startup has been performed correctly)
Email logging is configured:
  • development and test - none
  • demo - error level messages are sent to developers
  • production - error level messages are sent to the project leader immediately, warnings are sent on a per day basis to developers
Administration console security is configured:
  • development - no protection
  • test, demo and production - password protected
  • demo and production - ciphered protocol
File permissions are set to:
  • development - no protection
  • test and demo - sticky bit and read-write permissions on %devel for deployment folders, logs and temporary dirs
  • production - sticky bit and read-write permissions on %manager for deployment folders,

Friday, January 23, 2009

Dreamweaver CS3 on Crossover Linux Pro 7.1.0 [NOT WORKING]

Install your licensed Dreamweaver CS3 into a Windows XP host (you can use a virtualized machine for this) and run it at least once (to correctly register the product and verify the serial code), then prepare to move some files and registry keys from your Windows XP installation to your Linux box:
  • C:\Program Files\Adobe must go into ~/.cxoffice/your-bottle/drive_c/Program Files/
  • C:\Program Files\Common Files\Adobe must go into ~/.cxoffice/your-bottle/drive_c/Program Files/Common Files/
  • C:\Windows\Macromed must go into ~/.cxoffice/your-bottle/drive_c/Windows/
  • C:\Windows\WinSxs must go into ~/.cxoffice/your-bottle/drive_c/Windows/
Now you have to export your entire HKEY_LOCAL_MACHINE/Software/Macromedia/ registry key to a file and copy it onto your Linux box. Then run recode ucs-2..ascii exported.reg and import the recoded registry file into the your-bottle registry.

You should now be able to run your Dreamweaver CS3 issuing the following command on the command line:
/opt/cxoffice/bin/wine --bottle your-bottle "$HOME/.cxoffice/your-bottle/drive_c/Program Files/Adobe/Adobe Dreamweaver CS3/Dreamweaver.exe"

Thursday, January 22, 2009

Dual boot: AUTOCHK program not found

Today I faced a strange behavior while booting into my Windows Vista partition: a black screen showing the message "AUTOCHK program not found" followed by a BSOD and a quick reboot.

The problem was not resolved by the common bootrec /fixmbr command issued from the Vista Recovery Console, and I could not solve it using grub either (btw, my system dual boots with Ubuntu Intrepid Ibex).

After googling a little bit I found some instructions suggesting to unhide the Vista partition, but I quickly discovered my Vista partition was not hidden; instead its partition type was unknown!

The output of sudo fdisk -l in fact reported an unknown type for /dev/sda2 (my Vista partition), so I opened the Gnome Partition Editor, selected my Vista partition and changed the partition flags, adding the hidden flag; immediately the fdisk output changed, reporting a Hidden HPFS/NTFS partition. Reopening the Gnome Partition Editor and removing the hidden flag restored my partition type to 7 HPFS/NTFS.

For the sake of completeness, my partition and MBR mangling was due to my attempt to configure the power-on buttons of my Dell XPS M1330, as this notebook has two of them: one to boot into the system and a second one to boot into Windows Media Direct. My plan was to use the normal power button to boot into Ubuntu and the second one to boot into Vista. More on the topic will follow if I resolve the boot problems ;)

Thursday, January 15, 2009

GNU/Linux and FAT32

I've found an interesting post about FAT32 filesystem handling under Ubuntu GNU/Linux...