LARGE FILES



Normally jBASE hash files can grow up to 2GB, which is the usual file size limit found on most Unix systems. However, where user files need to be larger than 2GB, jBASE large files should be used. The jBASE HASH4 file type supports large files by using either one or more regular system files or raw unmounted disk partitions.

Space can be allocated in two ways: either by using a number of regular system files or by using an unmounted disk partition. Partitions are only applicable to Unix systems and should be created by the system administrator. The allocated space is shared by all HASH4 files that are flagged as "large".

Before selecting large files it may be beneficial to compare them with the functionality of Distributed Files.

 

CREATE LARGE FILE CONFIGURATION FILE.

The configuration file for large file support should be placed in the "config" subdirectory of the jBASE release directory and named "jediLargeFileConfig". All lines must begin with either a comment indicator, "#", or one of the keywords "filemin" or "disk".

e.g.
jediLargeFileConfig
#
# Unix Large file partition area
#
filemin=100000
disk=/dev/dsk/c0b0t0d0p2
disk=/dev/dsk/c1b0t4d0p2

or

jediLargeFileConfig
#
# Unix Large file area
#
filemin=100000
disk=/dev/largearea/jBaseLargeFile size=2048

or

jediLargeFileConfig
#
# NT Large file area
#
filemin=100000
disk=E:\JBASE_LARGE_FILE size=1024
disk=F:\JBASE_LARGE_FILE size=1024

The "filemin" value is used by the CREATE-FILE command to determine when a HASH4 should be created as a "large" file or regular file. The "filemin" value specifies the modulo break point above which large files will be created. The "size" value specifies the maximum size in megabytes of the file for this volume.  The range of the "size" value can be from 1 to 2048, or 2 gigabytes. A "size" larger that 2048 will produce unpredictable results when the disk partitions are created with the JLFILE command.

 

INITIALIZE LARGE FILE AREA.

Before large files can be created, the large file area must be made accessible to all users who are expected to access these files. The large file area should then be initialized with the "jlfile" command, which should be executed as the root user.

 

JLFILE

jlfile -Options

Option      Explanation
-b          black-hole check all the defined files
-c          correct any black-holed files (must be root)
-d          display the status of the large file partitions
-f file     display of all large files
-i          initialize the large file partitions
-o          override the "Are you sure" prompts
-pnn{-mm}   display pages nn {to mm}
-v          verbose display
-H          hole fill for regular files

In normal use, the -i option initializes the large file partitions, the -d option displays status information about the large file partitions, and the -p option displays one or more pages of the large file partitions for debugging purposes.
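For example, a typical first-time sequence might look like this (run as root; the page numbers given to -p are purely illustrative):

# Initialize the partitions listed in jediLargeFileConfig (any existing data is lost)
jlfile -i
# Display the status of the large file partitions
jlfile -d
# Display pages 1 to 2 of the partitions for debugging
jlfile -p1-2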

When you initialize the large file partitions, the -H option is advisable if you used regular files instead of disk partitions, as will always be the case on Windows/NT systems. This causes the disk space specified with the "size=nn" operand to be written to disk as null values, ensuring that the disk space actually exists and is permanently allocated, which can also provide performance gains.

A file becomes "black-holed" when its "stub" has been deleted from the file system but its space remains allocated on the disk. This can occur, for example, if the "rm" command is used to remove a file instead of "delete-file", or on Windows/NT if the file is deleted through, for example, Windows Explorer. These errors can be detected using the "-b" option to jlfile, and the lost space can be recovered with the "-c" option. When using the -b and -c options, the -o option can also be specified to override the "are you sure you want to delete this file" prompt; it should only be used with extreme caution.
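For example, a check-and-repair pass might look like this (run as root; jlfile will still prompt for confirmation unless -o is added, which should be done only with care):

# Black-hole check all the defined large files
jlfile -b
# Correct any black-holed files, reclaiming their lost space
jlfile -c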

 

Example Unix

As root user

export JBCRELEASEDIR=/usr/jbc
export JBCGLOBALDIR=/usr/jbc
export LD_LIBRARY_PATH=/usr/jbc/lib:/usr/ccs/lib
export PATH=$PATH:/usr/jbc/bin
chmod a+rw /dev/dsk/c0b0t0d0p2
chmod a+rw /dev/dsk/c1b0t4d0p2
jlfile -i
jlfile: Warning: The following partitions will be initialized
ALL DATA CURRENTLY ON THE DEVICES WILL BE LOST !!
/dev/dsk/c0b0t0d0p2: 262144 pages
/dev/dsk/c1b0t4d0p2: 419584 pages

Enter Y to confirm continuation : Y
jlfile: 2 disks initialized successfully

Example NT

As administrator:

Set PATH=%PATH%;%JBCRELEASEDIR%\bin
jlfile -i -H
jlfile: Warning: The following partitions will be initialized
ALL DATA CURRENTLY ON THE DEVICES WILL BE LOST !!
E:\JBASE_LARGE_FILE: 262144 pages
F:\JBASE_LARGE_FILE: 262144 pages

Enter Y to confirm continuation : Y
jlfile: 2 disks initialized successfully

 

CONFIGURE CREATE-FILE TO USE LARGE FILES.

By default jBASE will continue to create HASH4 files as regular files. To invoke large file support, you need to configure the CREATE-FILE command to use the allocated large file area instead of regular files.

In the configuration file the "filemin" value should be set to the break point modulo above which files will be created as large files and use the allocated large file area. This value can be overridden in two ways: either by specifying LARGEFILE=YES or LARGEFILE=NO on the CREATE-FILE command line, or by setting the JEDI_LARGEFILE environment variable to a different break point modulo.

Example 1
JEDI_LARGEFILE=10
export JEDI_LARGEFILE
create-file PIPE 1,1 29,1
[ 417 ] File PIPE]D created , type = J4
[ 417 ] File PIPE created, type = J4, base page number 117364

Example 2
create-file PIPE 1,1 97,1 LARGEFILE=YES
[ 417 ] File PIPE]D created , type = J4, base page number 20766
[ 417 ] File PIPE created, type = J4, base page number 129407

Example 3
JEDI_CREATEFILE="LARGEFILE=TRUE"
export JEDI_CREATEFILE
create-file PIPE 1,1 29,1
[ 417 ] File PIPE]D created , type = J4, base page number 91131
[ 417 ] File PIPE created, type = J4, base page number 117364

DELETE-FILE extension
The DELETE-FILE command has been extended with a recursive option, which can be used to delete several files in a single operation.

DELETE-FILE -r $HOME
This command will delete all files in the $HOME directory.

DELETE-FILE MYFILE* (R
This command will delete all files beginning with the string "MYFILE".

 

LARGE FILE BACKUP AND DELETION

When a large file is created, a very small "stub" file is also created in the file system. This stub points into the large file partitions and allows backwards compatibility with existing applications.

You will need to use jbackup or account-save to save the database. Do not use operating system commands such as cpio or tar, as these will only back up the "stub" file and not the large file partitions.

During a complete database restore, you will need to use the "jlfile -i" command to re-initialize the large file partitions before using jrestore or account-restore.
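The ordering matters: the partitions must be re-initialized before the restore runs. A minimal sketch, with the jrestore options omitted because they depend on how the jbackup or account-save was taken:

# As root: re-initialize the large file partitions
# (any data still held in the partitions is lost at this point)
jlfile -i

# Then run jrestore (or account-restore) with whatever options
# match the original jbackup or account-save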

To delete an entire account, use "delete-file -r $HOME" to delete all the jBASE files (and thus release their space in the large file partitions), followed by "rm -rf $HOME/*" to delete the remaining Unix files. On NT you would clean up using Windows Explorer.
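For example, on Unix the clean-up of an account held in $HOME might look like this:

# Delete the jBASE files first, releasing their space in the large file partitions
delete-file -r $HOME
# Then remove the remaining Unix files and directories
rm -rf $HOME/*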

Movement or deletion errors can be detected using the -b option to the jlfile command and corrected with the -bc options. However, such errors should not occur if you use the normal jBASE utilities instead of operating system commands.

 

CHOOSING LARGE FILES

In general, the regular file system should be used. However, there may be reasons for choosing large file partitions.

The general advantages of using large files are:

  • Can be faster in some applications.
  • The files can break the 2GB barrier and may be up to 512GB.

The general disadvantages of using large files are:

  • The problems previously noted that are normally associated with legacy operating systems, such as deleting a file while another user has it open, black-holing, and losing data space.
  • If for some reason the overflow table becomes corrupt, then this affects all large files. Using regular files, each file has its own overflow table and so any corruption only applies to a single file. This may not be much of a consolation as often a single corrupted file requires an entire database restore anyway.
  • More systems administration effort required, especially if it is necessary to resize the large file partitions.
  • The disk space reserved for jBASE large file partitions can only effectively be used by jBASE applications. Non-jBASE applications will not be able to use this disk space, whereas on a normal file system both jBASE and non-jBASE applications can compete for the disk space.

 

