LISTSERV at Work (L-Soft)
Issue 3, 2011

   Tech Tip: LISTSERV


Q: How can I adjust the amount of memory available to LISTSERV on my Linux system?

As with any application, LISTSERV must be allocated sufficient memory in order to operate. When the available operating system memory is exhausted, applications can no longer operate normally. By far the most common cause of LISTSERV application crashes on the Linux platform is memory limitation. Operations like indexing the notebooks of lists with especially large archives require a great deal of memory, and a failure of the operating system to allocate that memory can result in a crash. Such crashes often look something like this in the LISTSERV log:

3 Feb 2011 00:01:31 Reindexing SAMPLE LOG1102A...
>>> Error 8 from HeapAlloc() for 188773920 bytes <<<

The HeapAlloc() error is a dead giveaway: LISTSERV requested a memory allocation from the operating system, and the allocation request failed. A crash is sure to follow.

After a crash, we might run gdb against the core file and see something like this:

(gdb) bt
#0 0x08096ef3 in Alloc ()

Again, the Alloc() frame tells us that the process failed during a memory allocation request.
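
Note that we only get a core file to analyze if core dumps are enabled in the shell from which LISTSERV is started; as the ulimit output later in this article shows, the default core file size limit is often 0, which suppresses them. A minimal sketch of the workflow, with an illustrative path to the lsv binary (adjust for your installation, and note that some systems append a process ID to the core file name):

[listserv@sample ~]$ ulimit -c unlimited
[listserv@sample ~]$ gdb ~listserv/bin/lsv core
(gdb) bt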

The Solution

There are two ways to solve memory allocation problems: decrease the amount of memory requested (demand), or increase the amount of memory available (supply). A previous Tech Tip (Issue 2, 2007) discussed how to manage LISTSERV notebook files to reduce the amount of memory requested by LISTSERV. In this Tech Tip, we'll discuss how to manage the supply side of the equation.

If what we're exhausting is the amount of real and virtual memory in the machine, there's not much we can do but add RAM. The Linux vmstat command gives us a running view of available resources:

# vmstat 2
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0     76 1727744 117980 362372    0    0    11    22    1    0  1  0 98  0  0
 0  0     76 1727744 117980 362372    0    0     0     0 1015   63  0  0 100  0  0
 0  0     76 1727744 117980 362372    0    0     0     0 1017   63  0  0 100  0  0
 0  0     76 1727992 117984 362368    0    0     0    20 1014   70  0  0 100  0  0
 0  0     76 1727992 117984 362372    0    0     0     0 1015   63  0  0 100  0  0

The first line of output is a summary of the statistics since the system was last restarted. Following that, we get real-time updates at the specified interval (in this case, two seconds). For memory measurement, we're interested in the swpd, free, buff, and cache counters. The swpd counter shows how much swap space is in use, free tells us how much memory is free, buff is buffered memory, and cache is the file cache. When free memory is exhausted, we start swapping to virtual memory, the swpd counter increases, and our system performance suffers. When we run out of swap space, we're out of memory, and allocation requests start to fail. We can use the vmstat command to start monitoring and then force a reindex of our LISTSERV lists with LISTSERV's REINDEX * command to see if we're exhausting the available system memory.
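
For example, we can watch memory from one terminal while triggering the reindex from another. A minimal sketch, assuming the lcmd utility shipped with the unix version of LISTSERV is on the PATH (the command can also be sent to LISTSERV by mail):

[listserv@sample ~]$ vmstat 2           <- terminal 1: sample memory every two seconds

[listserv@sample ~]$ lcmd "REINDEX *"   <- terminal 2: force a reindex of all lists

The quotes around REINDEX * keep the shell from expanding the asterisk against filenames in the current directory.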

But what if we aren't? What if we see plenty of RAM to spare, but we still get memory allocation failures from LISTSERV? For the answer to that, we need to look at user and process quotas.

Quotas

Linux has a system of user quotas that may limit access to memory-related resources for users, groups or processes. For a quick peek at the quotas for an individual user, we can run the ulimit command from the bash shell as that user:

[listserv@sample ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 60927
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

We won't go into each of these limits here, but several of them could cause application crashes if reached. The most obvious ones to check are max memory size and virtual memory. However, excessively limiting the data seg size and stack size can also result in memory allocation errors.
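
We can also check the limits that actually apply to an already-running LISTSERV process, which may differ from those of a fresh login shell. On Linux kernels that expose /proc/<pid>/limits, a sketch like this works (it assumes a single LISTSERV daemon process named lsv, per the process name convention mentioned below):

[listserv@sample ~]$ cat /proc/$(pgrep -n -u listserv lsv)/limits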

We can change some of these settings from the running shell with the ulimit command:

[listserv@sample ~]$ ulimit -c unlimited
[listserv@sample ~]$ ulimit -a
core file size          (blocks, -c) unlimited
...

But those changes apply only to the current shell session; they won't survive a logout, let alone a reboot. To make them permanent (on most Linux distributions), we need to add them to the .bashrc or .profile for the 'listserv' user.
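
For example, lines like these at the end of ~listserv/.bashrc would raise the limits most relevant here each time the account logs in (the values are illustrative, and a non-root user can only raise soft limits up to the hard limits set by the administrator):

ulimit -c unlimited   # allow core dumps for post-mortem debugging
ulimit -d unlimited   # data seg size
ulimit -s unlimited   # stack size
ulimit -v unlimited   # virtual memory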

Finally, on many UNIX-like systems, it is possible for a system administrator to define user, group or process limits in /etc/security/limits.conf. If using ulimit doesn't solve the problem, you should check the limits.conf file to see if there are security limits set there that may govern the 'listserv' user or 'lsv' processes.
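
Each limits.conf entry is a single line giving a domain (a user name, @group, or *), a type (soft or hard), an item and a value. As an illustrative sketch, entries like these would lift the address-space and stack limits for the 'listserv' user; see the limits.conf(5) man page for the exact item names your distribution supports:

# /etc/security/limits.conf
# <domain>    <type>    <item>    <value>
listserv      soft      as        unlimited
listserv      hard      as        unlimited
listserv      soft      stack     unlimited
listserv      hard      stack     unlimited

These entries are applied by pam_limits, so they take effect the next time the 'listserv' user logs in.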


Subscribe to LISTSERV at Work.


© L-Soft 2011. All Rights Reserved.




