Wednesday, February 11, 2009

Tuning PostgreSQL on Linux

I've found an interesting documentation page which applies to our storage production environment: it's about kernel resources.

In brief, on a Linux box we can face three problems:
  • System V IPC Parameters

    The default maximum segment size is 32 MB, which is only adequate for small PostgreSQL installations. However, the remaining defaults are quite generously sized, and usually do not require changes. The maximum shared memory segment size can be changed via the sysctl interface. For example, to allow 128 MB, and explicitly set the maximum total shared memory size to 2097152 pages (the default):

    $ sysctl -w kernel.shmmax=134217728
    $ sysctl -w kernel.shmall=2097152

    In addition, these settings can be preserved between reboots in /etc/sysctl.conf.

    Older distributions might not have the sysctl program, but equivalent changes can be made by manipulating the /proc file system:

    $ echo 134217728 >/proc/sys/kernel/shmmax
    $ echo 2097152 >/proc/sys/kernel/shmall
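
    To make these values permanent, the equivalent entries in /etc/sysctl.conf would look something like the following sketch (the numbers are just the sample values used above, not tuned recommendations):

    # /etc/sysctl.conf -- loaded at boot, or apply immediately with "sysctl -p"
    kernel.shmmax = 134217728
    kernel.shmall = 2097152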
  • Memory Overcommit

    In Linux 2.4 and later, the default virtual memory behavior is not optimal for PostgreSQL. Because of the way that the kernel implements memory overcommit, the kernel might terminate the PostgreSQL server (the master server process) if the memory demands of another process cause the system to run out of virtual memory.

    If this happens, you will see a kernel message that looks like this (consult your system documentation and configuration on where to look for such a message):

    Out of Memory: Killed process 12345 (postgres). 

    This indicates that the postgres process has been terminated due to memory pressure. Although existing database connections will continue to function normally, no new connections will be accepted. To recover, PostgreSQL will need to be restarted.
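
    The message normally lands in the kernel ring buffer and the system log; a quick way to look for it is something like the following (log file locations vary by distribution):

    $ dmesg | grep -i "out of memory"
    $ grep -i "out of memory" /var/log/messages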

    One way to avoid this problem is to run PostgreSQL on a machine where you can be sure that other processes will not run the machine out of memory. If memory is tight, increasing the swap space of the operating system can help avoid the problem, because the out-of-memory (OOM) killer is invoked whenever physical memory and swap space are exhausted.

    On Linux 2.6 and later, an additional measure is to modify the kernel's behavior so that it will not "overcommit" memory. Although this setting will not prevent the OOM killer from being invoked altogether, it will lower the chances significantly and will therefore lead to more robust system behavior. This is done by selecting strict overcommit mode via sysctl:

    sysctl -w vm.overcommit_memory=2

    or placing an equivalent entry in /etc/sysctl.conf. You might also wish to modify the related setting vm.overcommit_ratio. For details see the kernel documentation file Documentation/vm/overcommit-accounting.
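
    As a sketch, the persistent form in /etc/sysctl.conf could look like this (overcommit_ratio is shown at its kernel default of 50 purely for illustration):

    # strict accounting: commit limit = swap + overcommit_ratio% of physical RAM
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 50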

    Some vendors' Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl parameter. However, setting vm.overcommit_memory to 2 on a kernel that does not have the relevant code will make things worse not better. It is recommended that you inspect the actual kernel source code (see the function vm_enough_memory in the file mm/mmap.c) to verify what is supported in your copy before you try this in a 2.4 installation. The presence of the overcommit-accounting documentation file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or your kernel vendor.

  • Resource Limits

    Unix-like operating systems enforce various kinds of resource limits that might interfere with the operation of your PostgreSQL server. Of particular importance are limits on the number of processes per user, the number of open files per process, and the amount of memory available to each process. Each of these has a "hard" and a "soft" limit. The soft limit is what actually counts, but it can be changed by the user up to the hard limit. The hard limit can only be changed by the root user. The system call setrlimit is responsible for setting these parameters. The shell's built-in command ulimit (Bourne shells) or limit (csh) is used to control the resource limits from the command line.

    On BSD-derived systems the file /etc/login.conf controls the various resource limits set during login. See the operating system documentation for details. The relevant parameters are maxproc, openfiles, and datasize. For example:

    default:\
    ...
    :datasize-cur=256M:\
    :maxproc-cur=256:\
    :openfiles-cur=256:\
    ...

    (-cur is the soft limit. Append -max to set the hard limit.)
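
    On Linux the soft limits can be inspected and raised per shell with ulimit; for example, in a Bourne-style shell (the values are only illustrative):

    $ ulimit -a          # show all current limits
    $ ulimit -n 4096     # raise the soft limit on open files for this shell
    $ ulimit -u 2048     # raise the soft limit on user processes (bash)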

    Kernels can also have system-wide limits on some resources.

    On Linux /proc/sys/fs/file-max determines the maximum number of open files that the kernel will support. It can be changed by writing a different number into the file or by adding an assignment in /etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled; see /usr/src/linux/Documentation/proc.txt for more information.
    The PostgreSQL server uses one process per connection so you should provide for at least as many processes as allowed connections, in addition to what you need for the rest of your system. This is usually not a problem but if you run several servers on one machine things might get tight.
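
    On Linux systems using PAM, per-user limits are usually configured in /etc/security/limits.conf; a minimal sketch for a dedicated postgres account (the user name and values are assumptions to adapt):

    # /etc/security/limits.conf
    postgres   soft   nproc    1024
    postgres   hard   nproc    2048
    postgres   soft   nofile   4096
    postgres   hard   nofile   4096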

    The factory default limit on open files is often set to "socially friendly" values that allow many users to coexist on a machine without using an inappropriate fraction of the system resources. If you run many servers on a machine this is perhaps what you want, but on dedicated servers you might want to raise this limit.
    On the other side of the coin, some systems allow individual processes to open large numbers of files; if more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happening, and you do not want to alter the system-wide limit, you can set PostgreSQL's max_files_per_process configuration parameter to limit the consumption of open files.
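
    If you prefer to cap PostgreSQL itself rather than raise the system-wide limit, the relevant postgresql.conf entry is simply (the value is an example; the default is 1000):

    max_files_per_process = 500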

Tuesday, February 3, 2009

JBoss and PermGen OutOfMemoryError

The "PermGen" error happens, when the Java virtual machine runs out of memory in the permanent generation. Recall that Java has a generational garbage collector, with four generations: eden, young, old and permanent.

In the eden space, objects are very short-lived and garbage collection is swift and frequent.

The young generation consists of objects that survived eden (or were pushed down to young because eden was full at the time of allocation). Garbage collection in the young generation is less frequent, but still happens at quite regular intervals (provided that your application actually does something and allocates objects every now and then).

The old generation, well, you figured it: it contains objects that survived the young generation, or have been pushed down, and garbage collection there is even less frequent but still happens.

And finally, the permanent generation. This is for objects that the virtual machine has decided to endow with eternal life, which is precisely the core of the problem. Objects in the permanent generation are never garbage collected; that is, not under normal circumstances, when the JVM is started with its usual command line parameters.

So what happens when you redeploy your web application is that your WAR file is unpacked and its class files are loaded into the JVM. And here's the thing: those classes almost always end up in the permanent generation... because, seriously, who wants to garbage collect their classes?!? Well, apparently application servers do, and here's how we make that happen for JBoss (the same settings apply to other application servers): add the following parameters to the JAVA_OPTS line in the bin/run.conf file:

-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=128m


You may want to tune the MaxPermSize=128m part to fit your needs...
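
In practice the JAVA_OPTS line in bin/run.conf ends up looking something like this (a sketch; keep whatever other options you already pass to the JVM):

JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=128m"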

JBoss and multiple environments

Here at SmartLab we use four environments during the software life-cycle, each with its own characteristics:
  • the development environment is the one running on each developer's computer and allows each developer to write and test their own code in a non-shared environment without worrying about concurrent or conflicting changes;
  • the integration-test environment, also known as test, is the first opportunity for multiple developers and development teams to integrate the different parts into a single solution; this environment usually respects the architectural principles of the project but may be scaled down for practical reasons;
  • the demo environment is the last developer-accessible environment and it fully respects all the architectural choices made for the system; in addition, this environment should provide some sort of access from the outside world to allow for pre-release reviews;
  • the production environment is where the system is deployed for public access.
The preceding environments are listed in ascending order of importance, security needs and computational power; each one runs an application server which needs to be configured properly to fit the environment's specific needs.

File logging is configured:
  • development - at a trace level and without rotation or append
  • test - at a debug level without rotation but with append
  • demo - at an info level with rotation and append
  • production - at an info level with rotation, append and backup
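As an illustration, the production file-logging policy above could be expressed in JBoss's conf/jboss-log4j.xml (conf/log4j.xml on older releases) with an appender roughly like this; the file name, date pattern and threshold are assumptions to adapt:

<appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
   <param name="File" value="${jboss.server.log.dir}/server.log"/>
   <param name="Append" value="true"/>
   <param name="DatePattern" value="'.'yyyy-MM-dd"/>
   <param name="Threshold" value="INFO"/>
   <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
   </layout>
</appender>
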
Console logging is configured:
  • development - at a debug level
  • test - at a warn level
  • demo - at a warn level
  • production - at an error level (used only to ensure startup has been performed correctly)
Email logging is configured:
  • development and test - none
  • demo - error level messages are sent to developers
  • production - error level messages are sent to the project leader immediately, warnings are sent to developers on a daily basis
Administration console security is configured:
  • development - no protection
  • test, demo and production - password protected
  • demo and production - ciphered protocol
File permissions are set to:
  • development - no protection
  • test and demo - sticky bit and read/write permissions for %devel on deployment folders, logs and temporary dirs
  • production - sticky bit and read/write permissions for %manager on deployment folders,