High Availability configuration

If you are using the LIXA software in a mission critical environment, you should set up a high availability configuration. If you take a look at Figure 3.1, “Typical LIXA topology”, you will notice that the lixad daemon is a single point of failure: if the system hosting lixad crashed, the Application Program could not perform distributed transaction processing because the lixad daemon would be unavailable.

To avoid the single point of failure, the suggested configuration is active/passive, as described in the Wikipedia “High-availability cluster” page. You can use:

lixad requires a filesystem and a block device that support the mmap(), munmap() and msync() functions. The faster the filesystem and block device, the better the lixad performance.

The easiest high-availability configuration uses:

The following pictures show a high availability configuration in action:

Figure 10.1. HA, step 1: the active node is on the left, the passive one is on the right

Figure 10.2. HA, step 2: the active node fails

Figure 10.3. HA, step 3: the passive node takes over the service

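The takeover in step 3 is normally automated by the cluster resource manager. As a rough sketch of the actions involved (the device name, mount point, virtual IP address, network interface and lixad install path are illustrative assumptions, not taken from the LIXA documentation), the node becoming active would do something like:

```shell
#!/bin/sh
set -e
# Illustrative takeover actions; a real cluster delegates these steps to its
# resource manager rather than running a hand-written script.
mount /dev/sdb1 /opt/lixa/var           # acquire the shared disk holding the lixad state files
ip addr add 192.168.0.100/24 dev eth0   # bring up the virtual IP the Application Programs connect to
/opt/lixa/sbin/lixad --daemon           # start lixad on the new active node
```

The mirror-image actions (stop lixad, release the virtual IP, unmount the shared disk) must run on the failing node, or the node must be fenced, before the passive node takes over.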
Note

If you put all the LIXA installation files (/opt/lixa) on the shared disk, you will not be able to run the LIXA utility programs from the node that does not own the shared disk: this should not be an issue if your active/passive cluster hosts only lixad.

If you are using a more complicated configuration, it might be preferable to put only /opt/lixa/var and /opt/lixa/etc on the shared disk. You can implement such a configuration using symbolic links, or by customizing the configure step with the --sysconfdir=DIR and --localstatedir=DIR options. Run ./configure --help for more details.
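Both approaches can be sketched as follows; /mnt/shared is an assumed mount point for the shared disk, so adjust it to your cluster layout:

```shell
# Hypothetical shared-disk mount point; adjust to your environment.
SHARED=/mnt/shared

# Option 1: build LIXA so that etc/ and var/ live on the shared disk.
./configure --prefix=/opt/lixa \
            --sysconfdir=$SHARED/lixa/etc \
            --localstatedir=$SHARED/lixa/var
make && make install

# Option 2: keep the default /opt/lixa layout and replace the two
# directories with symbolic links pointing into the shared disk.
mv /opt/lixa/etc $SHARED/lixa/etc && ln -s $SHARED/lixa/etc /opt/lixa/etc
mv /opt/lixa/var $SHARED/lixa/var && ln -s $SHARED/lixa/var /opt/lixa/var
```

With either option, the node that owns the shared disk sees the lixad state and configuration, while the LIXA binaries remain locally installed on both nodes.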