Journaling Operations

Switching logsets
Selective journaling
Selective restores
Hot backup


Switching logsets

A logset consists of 1 to 16 files. Since each component file cannot exceed 2GB, the maximum capacity of a logset is 32GB. Before the active logset reaches its capacity, a switch must be made to another logset using the jlogadmin command. Failure to do so will render journaling inoperable and may also cause database updates from jBASE programs to fail.

Using 16 files in a logset consumes no more space than using just one, because updates are striped across all the files in the logset. When journaling is active on a live system, the recommendation is to define 16 files for each logset.

At least 2 logsets must be configured (with jlogadmin) so that when the active logset nears capacity, a switch can be made to another logset. Switching to a logset causes that logset to be initialized, i.e. all files in that logset are cleared; the logset that is switched from remains intact. The usual command to switch logsets is jlogadmin -l next. If there are 4 logsets defined, this command works as follows:

Active logset before switch    Active logset after switch
             1                              2
             2                              3
             3                              4
             4                              1
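The wrap-around in this table follows a simple modulo rule. As a plain-shell illustration of the numbering only (this is not a jBASE command), with 4 logsets defined:

```shell
# Illustration only: the "next logset" rule with 4 logsets defined,
# showing the wrap-around from logset 4 back to logset 1.
for current in 1 2 3 4; do
  next=$(( current % 4 + 1 ))
  echo "logset $current -> logset $next"
done
```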

If a jlogdup process is running in real time to replicate to another machine, it should automatically start reading the next logset when it reaches the end of the current logset. To effect this behavior, use the parameter terminate=wait in the input specification of the jlogdup command.
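A minimal sketch of such a replication pipeline, reusing the node name and restore script that appear in the hot backup example later in this document (both are site-specific):

```
# read the current logset, wait for new entries instead of terminating
# at end-of-set, and pipe every update to the standby machine
jlogdup input set=current terminate=wait output set=stdout | rsh nodek /GLOBALS/JSCRIPTS/logrestore
```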



Selective journaling

The jBASE journaler does not record every update that occurs on the system. It is important to understand what is, and is not, automatically recorded in the transaction log.

What is journaled?
Everything that is updated through the jEDI interface is journaled, unless the file has specifically been designated as non-loggable (i.e. with the jchmod -L filename command). This includes non-jBASE hash files such as directories.

Print job logging is determined by the LOGGERSIZE entry in the $JBCRELEASEDIR/config/jspform_formtype file. By default, the setting in $JBCRELEASEDIR/config/jspform_deflt is LOGGERSIZE 10M, which means that print jobs going to the default SP-TYPE that are 10 megabytes or smaller will be logged.
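For example, to log print jobs of up to 50 megabytes for the default form type, the entry in $JBCRELEASEDIR/config/jspform_deflt could be changed to the following (the 50M value is illustrative):

```
LOGGERSIZE 50M
```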

What is NOT journaled?
Broadly, the opposite of the above; in particular:

  • Operations using non-jBASE commands, such as the rm and cp commands or the vi editor.
  • Index definitions.
  • Trigger definitions.
  • Remote files accessed through jRFS via remote Q-pointers or stub files.
  • When a SUBROUTINE is cataloged, the resulting shared library is not logged.
  • When a PROGRAM is cataloged, the resulting binary executable file is not logged.
  • Internal files used by jBASE such as jPMLWorkFile, jBASEWORK and jutil_ctrl are set to non-logged only when they are automatically created by jBASE. If you create any of these files yourself, you should specify that they are not logged (see the note on CREATE-FILE below). You may also choose not to log specific application files.

It is recommended that most application files be enabled for transaction journaling. Exceptions may include temporary scratch files and work files used by an application. Files can be excluded from journaling by specifying LOG=FALSE with the CREATE-FILE command, or by using the -L option of the jchmod command. Journaling on a directory can also be disabled with jchmod; when this is done, a file called .jbase_header is created in the directory to hold the information.
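For instance, a scratch file might be excluded at creation time, or an existing work file switched to non-logged afterwards. A sketch using the two options just described (the file names, and the TYPE=J4 parameter, are examples):

```
CREATE-FILE SCRATCH.WORK TYPE=J4 LOG=FALSE
jchmod -L WORKFILE
```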

Remote files are disabled for journaling by default. Individual remote files can be enabled for journaling by:

  • Using QL instead of Q in attribute 1 of the Q pointer, e.g.
  • Adding L to attribute 2 of the file stub, e.g.
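As an illustration, a Q-pointer item enabling journaling might look like the following (the item, account and file names are hypothetical; the significant detail is the L appended to Q in attribute 1):

```
REMOTE.CUSTOMERS
001 QL
002 REMOTEACCOUNT
003 CUSTOMERS
```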

In general, journaling on specific files should not be disabled for "efficiency" reasons as such measures will backfire when you can least afford it.



Selective restores

There may be times when a selective restore is preferable to a full restore. This cannot be automated and has to be judged on its own merits.

For example, assume a file called CUSTOMERS was accidentally deleted. In this case you would probably want to log users off while it is restored, whereas certain other files may not require this measure. The mechanism to restore the CUSTOMERS file is to selectively restore the image taken by the last jbackup and then restore the updates to the file from the logger journal. For example:

# jrestore -h '/JBASEDATA/PROD/CUSTOMERS*' < /dev/rmt/1
# cd /tmp
# create-file TEMPFILE TYPE=TJLOG set=current terminate=eos
[ 417 ] File TEMPFILE]D created , type = J4
[ 417 ] File TEMPFILE created , type = TJLOG
21 Records Selected
# jlogdup input set=current output set=database

If required, the jlogdup rename and renamefile options can be used to restore the data to another file.



Hot backup

The following describes a Transaction Journaling installation in which the transaction log is replicated from system nodej to a backup system nodek, i.e. a failsafe/hot backup configuration.

The key points are:

  • You can configure the security of the logger; however, there is always a trade-off between performance and security.
  • By default it will only recover complete transactions.
  • It is very configurable and allows you to log to disk, to tape, to a remote machine or any combination of these.
  • Extensive reporting utilities.
  • Comprehensive configurable selective-restore capabilities.

The steps are:

  1. Begin transaction journaling on nodej (assuming nodej is the live machine and nodek the hot stand-by)

  2. Start a jbackup/jrestore from nodej to nodek, using the jbackup option '-s /tmp/backup.logset0', which will create a time-stamp file for later use

  3. Once (2) completes, you need to transfer the transaction log entries accumulated on nodej since the backup began over to nodek. This could be achieved with:

    jlogdup -u10 input set=eldest start=/tmp/backup.logset0 terminate=wait output set=stdout | rsh nodek /GLOBALS/JSCRIPTS/logrestore

    See below for a dissection of this command. It pipes the log data through the script /GLOBALS/JSCRIPTS/logrestore on nodek, which sets up the jBASE environment and then restores from its stdin:

    # set up the jBASE environment (JBCRELEASEDIR, PATH, etc.)
    # as for your usual users
    jlogdup input set=stdin output set=database

  4. Monitor the status of the jlogdup by running jlogstatus from a dedicated window:
    jlogstatus -r5 -a

  5. Optionally, run jlogstatus on nodek to ensure the log is being restored correctly.

  6. You will need to configure more than one logset. You will start logging to, say, logset 1, and at some point switch over to logset 2. This will usually be done daily, just before each jbackup to tape. Then on the third day, start another jbackup to tape and re-use logset 1.

  7. Monitor the jlogstatus display to ensure that the logsets don't fill the disk! You can configure transaction journaling to perform certain actions when the log disks fill past a configurable threshold. In the event of a failure you then have a full, up-to-date set of disks on nodek to switch over to.
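A minimal sketch of the daily rotation in step 6, using the jlogadmin switch command described earlier in this document (the tape backup command itself is site-specific and omitted):

```
# run once daily, immediately before the jbackup to tape
jlogadmin -l next
```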


jlogdup command

jlogdup -u10 input set=eldest start=/tmp/backup.logset0 terminate=wait output set=stdout | rsh nodek /GLOBALS/JSCRIPTS/logrestore

Component                  Description
-u10                       Display a "*" every 10 updates; a sort of minor verbose mode.
input                      Start of the input specification.
set=eldest                 Start the jlogdup at the eldest defined update across all the logsets.
start=/tmp/backup.logset0  Duplicate updates only from the point the jbackup was started, as recorded in the time-stamp file created by the jbackup -s /tmp/backup.logset0 option.
terminate=wait             Don't terminate jlogdup when it exhausts the logset; instead keep waiting for new entries. Use the 'jlogadmin -k' option to terminate it cleanly.
output                     Start of the output specification.
set=stdout                 Output goes to the terminal or a pipe.
rsh nodek /GLOBALS/JSCRIPTS/logrestore
                           Use the standard UNIX remote shell capability to pass the log data through the pipe/stdin onto the nodek system, where the previously configured logrestore script restores it.

