Preparing to Install Caché
Before you install Caché, read the following sections:
Caché Installation Planning Considerations
Read the following Caché installation planning considerations that apply to your installation:
Installing the Atelier Development Environment
Atelier is the Eclipse-based development environment for Caché.
Atelier is available as a separate download in addition to Caché or Ensemble. You can choose to install either a stand-alone Rich Client Platform (RCP) application, or a plug-in that can be added to an existing Eclipse installation. Users of the RCP application can add additional Eclipse plug-ins. Atelier uses the Eclipse auto-update mechanism to help users get the latest changes. For information on downloading Atelier and for the Atelier documentation, see the Atelier home page at http://www.intersystems.com/atelier.
Caché Installation Directory
Throughout the Caché documentation, the directory in which a Caché instance is installed is referred to as install-dir. This directory varies by platform, installation type, and user choice, as shown in the following table:
| Platform | Installation Type | Directory |
|---|---|---|
| Windows | attended | C:\InterSystems\Cache (or CacheN when multiple instances exist) unless installing user specifies otherwise. |
| Windows | unattended | C:\InterSystems\Cache (or CacheN when multiple instances exist) unless INSTALLDIR property specifies otherwise. |
| UNIX®, Linux | attended | Installing user must specify. Do not choose the /home directory, or any of its subdirectories. |
| UNIX®, Linux | unattended | ISC_PACKAGE_INSTALLDIR parameter required. |
| Linux | RPM | Single instance installation, always /usr/cachesys. |
The installation directory of a Caché instance cannot be changed following installation.
Installation Directory Restrictions
You cannot install Caché into a destination directory that has any of the following characteristics:
- It is a UNC (non-local) path.
- It is at the root level of a drive (such as C:\).
- It is anywhere under the \Program Files directory.
- It has a caret (^) in the pathname.
- It has a character that is not in the US ASCII character set.
Caché Character Width
You must select either 8-bit or Unicode support for your installation:
- 8-bit — The software handles characters in an 8-bit format.
- Unicode — The software handles characters in the Unicode (16-bit) format. Select Unicode if your application uses languages that store data in a Unicode format, such as Japanese.
InterSystems recommends 8-bit character support for locales based upon the Latin-1 character set, ISO 8859–1. Use Unicode if the base character set for your locale is not Latin-1, or if you anticipate handling data from locales based upon a different character set. If you use an 8-bit version of Caché, your data is not portable to 8-bit locales based on a different character set.
If you choose a Unicode installation, you cannot revert to an 8-bit version without potential data loss. This is because an 8-bit version of Caché cannot retrieve 16-bit character data from a database.
If you are installing Ensemble, you must select Unicode.
Disk Space Requirements
For every platform, the installation kit must be available, either on your computer or on a network. Specific disk space requirements for each platform are:
- Windows:
  - A Caché installation that includes support for Caché Server Pages (CSP) uses approximately 1500 MB (megabytes) of disk storage (not including disk space for user data).
  - Any system that can effectively support Windows should be sufficiently powerful to run Caché. Caché performance greatly improves with increased processor and disk speed.
- UNIX®, Linux, macOS:
  - A standard Caché installation that includes support for Caché Server Pages (CSP) needs 1600–1950 MB (megabytes) of disk space, depending on the type of installation you choose.
  - In addition, 200 MB of space is required in the Caché installation directory. The installation procedure confirms that this disk space is available in the specified location before installing.
Supported Platforms and Components
For a list of operating system platforms on which this version of Caché is supported, see the online InterSystems Supported Platforms document for this release.
For a list of web servers on which InterSystems CSP technology is supported, see “Supported Web Servers” in the “Supported Technologies” chapter of the online InterSystems Supported Platforms document for this release.
If you are using CSP, install the web server before installing Caché to let the Caché installer configure the web server automatically. See the “CSP Architecture” chapter of the Using Caché Server Pages guide for more information.
Caché Private Web Server
With each instance, Caché installs a private web server and a private CSP Gateway to serve CSP pages, ensuring proper operation of the Management Portal and the Caché online documentation.
The private web server is installed to ensure that:
- The Management Portal runs out of the box.
- An out-of-the-box testing capability is provided for development environments.
The private web server is not supported for any other purpose.
For deployments of HTTP-based applications, including CSP, Zen, and SOAP over HTTP or HTTPS, do not use the private web server for any application other than the Management Portal; instead, you must install and deploy one of the supported web servers (see “Supported Web Servers” in the online InterSystems Supported Platforms document for this release).
- Windows: Its Windows service name is “Web Server for instname”, where instname is the instance name you enter when you install Caché. Caché installs the web server into the install-dir\httpd directory, where install-dir is the Caché installation directory. It is uninstalled when you uninstall the corresponding Caché instance.
The private web server configuration is preserved through upgrades.
If You Are Upgrading Caché
If you are performing an upgrade, first read and perform all necessary procedures described in the “Upgrading Caché” chapter of this book.
When upgrading, back up your old Caché installation after completing all the pre-installation upgrade tasks and before installing Caché.
Configuring Third-Party Software
InterSystems products often run alongside and interact with non-InterSystems tools. For important information about the effects these interactions can have, see the appendix “Configuring Third-Party Software to Work in Conjunction with InterSystems Products” in the Caché System Administration Guide.
Managing Caché Memory
There are two primary ways to configure how a Caché instance uses memory, described in the following sections:
The first action, allocating memory for the routine and database caches, determines the memory available to hold code and data. The second action, configuring gmheap, determines the memory available for all other purposes. Separately and together, these are important factors in the performance and functioning of the instance.
Two other memory settings are described in Other Caché Memory Settings, below.
For an in-depth look at Caché memory planning by an InterSystems senior technology architect, see InterSystems Data Platforms and Performance Part 4 - Looking at Memory on the InterSystems Developer Community.
See platform-specific sections in this book for other information on allocating memory.
If you change settings described in this section, click Save to save your modifications; restart Caché to activate them.
Allocating Memory for Routine and Database Caches
To allocate memory for routine and database caches:

- On the Management Portal, navigate to the Memory and Startup page (System Administration > Configuration > System Configuration > Memory and Startup).
- Select Manually.

When Caché is first installed, memory for the routine and database caches is set, by default, to be Automatically allocated. With this default, Caché allocates a conservative fraction of the available physical memory for the database cache, not to exceed 1 GB. This setting is not appropriate for production use. Before deploying the system for production use, or before performing any tests or benchmarking intended to simulate production use, change this setting to Manually and allocate sufficient memory for your routine and database caches as described in this section.
Allocating Memory for the Routine Cache
Memory Allocated for Routine Cache (MB) — The routine cache specifies the system memory allocated for caching server code.
Caché takes the total amount of memory allocated for the routine cache and creates buffers of different sizes according to this formula: it assigns half the total space to 64 KB buffers, three-eighths to 16 KB buffers, and one-eighth to 4 KB buffers. These groups of buffers of a given size are sometimes called pools.
The maximum number of buffers that Caché allocates to any pool is 65,529; the minimum is 205. Because Caché never allocates fewer than 205 buffers to any pool, the actual memory used for routine buffers (205 of each buffer size) can be larger than the amount specified in the configuration file. Regardless of the setting for the maximum routine size, the format for Caché routines does not allow more than 32,768 characters for literal strings.
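The buffer-pool formula above can be sketched as follows (a minimal illustration; the function name and the example allocation are ours, not part of Caché):

```python
# Sketch of the documented routine-buffer formula: half of the configured
# space goes to 64 KB buffers, three-eighths to 16 KB, one-eighth to 4 KB,
# with each pool clamped to between 205 and 65,529 buffers.

KB = 1024

def routine_buffer_pools(total_mb):
    """Return {buffer_size_kb: buffer_count} for a routine-cache allocation."""
    total_bytes = total_mb * 1024 * KB
    shares = {64: 1 / 2, 16: 3 / 8, 4: 1 / 8}
    pools = {}
    for size_kb, share in shares.items():
        count = int(total_bytes * share) // (size_kb * KB)
        pools[size_kb] = max(205, min(count, 65529))  # documented clamp
    return pools

# Example: a 100 MB routine cache.
print(routine_buffer_pools(100))  # {64: 800, 16: 2400, 4: 3200}
```

Note how a very small allocation is forced up to the 205-buffer minimum per pool, which is why actual memory use can exceed the configured amount.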
For information about allocating memory for routine buffers using the Caché parameter file (cache.cpf), see routines in the “[Config]” section of the Caché Parameter File Reference.
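As a rough, hedged sketch of what that setting looks like (the exact parameter layout is defined in the Caché Parameter File Reference, which should be treated as authoritative; the value here is illustrative):

```
[config]
routines=140
```

Here 140 is an example routine-cache allocation in MB.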
If you are configuring a large ECP system, allocate at least 50 MB of 8 KB buffers for ECP control structures in addition to the 8 KB buffers required to serve your 8 KB blocks over ECP. See the Memory Use on Large ECP Systems section of the “Developing Distributed Applications” chapter of the Caché Distributed Data Management Guide for details.
Allocating Memory for the Database Cache
Memory Allocated for [blocksize] Database Cache (MB) — The database cache specifies the system memory allocated for buffering data; this is also called creating global buffers. The database cache and the memory allocated to it are sometimes referred to as the global buffer pool.
Enter a separate allocation for each enabled database block size listed. The 8K block size is required and is listed by default. To enable more database block sizes (16K, 32K, 64K), use the DBSizesAllowed setting on the Startup Settings page (System Administration > Additional Settings > Startup). See DBSizesAllowed in the Caché Additional Configuration Settings Reference for more information.
Both block size and the maximum number of buffers available have implications for performance. To determine how many global buffers Caché will create for databases with a particular block size, divide the allocation for a block size by the block size; the smaller the block size, the larger the number of global buffers that will be created for databases with that block size. See “Large Block Size Considerations” in the chapter “Configuring Caché” in the book Caché System Administration for guidelines for selecting the appropriate block sizes for your applications.
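The buffer-count calculation described above can be sketched as follows (names are ours, for illustration only):

```python
# How many global buffers a database-cache allocation yields at a given
# block size: divide the allocation by the block size.

def global_buffer_count(alloc_mb, block_size_kb):
    """Number of global buffers for a cache allocation at one block size."""
    return (alloc_mb * 1024) // block_size_kb

# The same 4096 MB allocation yields far more buffers at 8 KB than at 64 KB:
for block_kb in (8, 16, 32, 64):
    print(block_kb, global_buffer_count(4096, block_kb))
```

This illustrates the trade-off: smaller block sizes give more buffers (finer-grained caching) from the same memory.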
Configuring Generic Memory Heap (gmheap)
You can configure gmheap on the Advanced Memory page (System Administration > Configuration > Additional Settings > Advanced Memory).
gmheap — The generic memory heap (also known as the shared memory heap) determines the memory available to Caché for purposes other than the routine and database caches.
To see details of used and available memory for gmheap, use the Shared Memory Heap Usage page (System Operation > System Usage; click the Shared Memory Heap Usage link).
For more information, see gmheap in the “Advanced Memory Settings” section of the Caché Additional Configuration Settings Reference and also Generic (Shared) Memory Heap Usage in the “Monitoring Caché Using the Management Portal” chapter of the Caché Monitoring Guide.
Other Caché Memory Settings
Other memory settings that you can change on the Memory and Startup page are:
- Maximum per Process Memory (KB) — The maximum memory allocation for a process for this Caché instance. The default is 262144 KB. The allowed range is 128 KB to 2147483647 KB.

  Note: It is not necessary to reset this value unless you have set it lower than its default (262144 KB). If you receive <STORE> errors, increase the size.

  This amount of process private memory, which is used for symbol table allocation and various other memory requirements (for example, I/O device access structures and buffers), is allocated in increasing extents as required by the application until the maximum is reached. The initial allocation is 128 KB. Once this memory is allocated to the process, it is not deallocated until the process exits.

- Enable Long Strings — If you select this check box, Caché allocates a large string stack to handle long strings for each process.
File System and Storage Configuration Recommendations
This section provides general recommendations in the following areas:
In addition, database configuration recommendations are outlined in the Configuring Databases section of the “Configuring Caché” chapter of the Caché System Administration Guide.
File System Recommendations
In the interests of performance and recoverability, InterSystems recommends a minimum of four separate file systems for Caché, to host the following:
- Installation files, executables, and system databases (including, by default, the write image journal, or WIJ, file)
- Database files (and optionally the WIJ)
- Primary journal directory
- Alternate journal directory
In addition, you can add another separate file system to the configuration for the WIJ file which, by default, is created in the install-dir\mgr\ directory. Ensure that such a file system has enough space to allow the WIJ to grow to its maximum size, that is, the size of the database cache as allocated on the Memory and Startup page (System Administration > Configuration > System Configuration > Memory and Startup) (see Memory and Startup Settings in the “Configuring Caché” chapter of the Caché System Administration Guide). For more information on the WIJ, see the “Write Image Journal” chapter of the Caché Data Integrity Guide.
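As a small illustration of the sizing rule above, the minimum space for a dedicated WIJ file system is the total of the database-cache allocations (the function and figures here are hypothetical, not from Caché):

```python
# The WIJ can grow to the size of the database cache, so a dedicated WIJ
# file system must hold at least the sum of all database-cache allocations
# (one allocation per enabled block size).

def min_wij_fs_mb(cache_allocations_mb):
    """Minimum file-system space (MB) for the WIJ: total database cache."""
    return sum(cache_allocations_mb)

# e.g. 4096 MB of 8 KB buffers plus 1024 MB of 64 KB buffers:
print(min_wij_fs_mb([4096, 1024]))  # 5120
```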
On UNIX®, Linux, and macOS platforms, /usr/local/etc/cachesys is the Caché registry directory and therefore must be on a local filesystem.
In the event of a catastrophic disk failure that damages database files, the journal files are a key element in recovering from backup. Therefore, you should place the primary and alternate journal directories on storage devices that are separate from the devices used by database files and the WIJ. (Journals should be separated from the WIJ because damage to the WIJ could compromise database integrity.) Since the alternate journal device allows journaling to continue after an error on the primary journal device, the primary and alternate journal directories should also be on devices separate from each other. For practical reasons, these different devices may be different logical units (LUNs) on the same storage array; the general rule is the more separation the better, with separate sets of physical drives highly recommended. See Journaling Best Practices in the “Journaling” chapter of the Caché Data Integrity Guide for more information about separate journal storage.
The journal directories and the WIJ directory are not configured during installation. For information on changing them after you install Caché, see Configuring Journal Settings in the Caché Data Integrity Guide.
Current storage arrays, especially SSD/Flash-based arrays, do not always allow for the type of segregation recommended above. When using such a technology, consult and follow the storage vendor’s recommendations for performance and resiliency.
In addition, this section includes information about the following:
Storage Configuration Recommendations
Many storage technologies are available today, from traditional magnetic spinning HDD devices to SSD and PCIe Flash based devices. In addition, multiple storage access technologies include NAS, SAN, FCoE, direct-attached, PCIe, and virtual storage with hyper-converged infrastructure.
The storage technology that is best for your application depends on application access patterns. For example, for applications that predominantly involve random reads, SSD or Flash based storage would be an ideal solution, and for applications that are mostly write intensive, traditional HDD devices might be the best approach.
The sections that follow provide guidelines as general suggestions. Specific storage product providers may specify separate and even contradictory best practices that should be consulted and followed accordingly.
Storage Connectivity
The following considerations apply to storage connectivity.
Storage Area Network (SAN) Fibre Channel
Use multiple paths from each host to the SAN switches or storage controllers. The level of protection increases with multiple HBAs, which protect against a single card failure; at a minimum, use a dual-port HBA.
To provide resiliency at the storage array layer, an array with dual controllers in either an active-active or active-passive configuration is recommended to protect from a storage controller failure, and to provide continued access even during maintenance periods for activities such as firmware updates.
If using multiple SAN switches for redundancy, a good general practice is to make each switch a separate SAN fabric to keep errant configuration changes on a single switch from impacting both switches and impeding all storage access.
Network Attached Storage (NAS)
With 10Gb Ethernet commonly available, for best performance 10Gb switches and host network interface cards (NICs) are recommended.
Having dedicated infrastructure is also advised, to isolate NAS traffic from normal network traffic on the LAN. This helps ensure predictable NAS performance between the hosts and the storage.
Jumbo frame support should be included to provide efficient communication between the hosts and storage.
Many network interface cards (NICs) provide TCP Offload Engine (TOE) support. TOE support is not universally considered advantageous, because the overhead and gains depend greatly on the server’s CPU having available cycles (or lack thereof). Additionally, TOE support has a limited useful lifetime, because system processing power rapidly catches up to, and in many cases exceeds, the TOE performance level of a given NIC.
Storage Configuration
The storage array landscape is ever-changing in technology features, functionality, and performance options, and multiple options will provide optimal performance and resiliency for Caché. The following guidelines provide general best practices for optimal Caché performance and data resiliency.
In the past, RAID10 was recommended for maximum protection and performance. However, storage controller capacities, RAID types and algorithm efficiencies, and controller features such as inline compression and deduplication provide more options than ever before. Additionally, your application’s I/O patterns will help you decide with your storage vendor which storage RAID levels and configuration provide the best solution.
Where possible, it is best to use storage block sizes similar to the block size of the files being stored. While most storage arrays have a lower limit on the block size that can be used for a given volume, approach the file’s block size as closely as possible; for example, a 32 KB or 64 KB block size on the storage array is usually a viable option for effectively supporting CACHE.DAT files with an 8 KB block format. The goal is to avoid excessive or wasted I/O on the storage array, based on your application’s needs.
The following table is provided as a general overview of storage I/O within a Caché installation.
| I/O Type | When | How | Notes |
|---|---|---|---|
| Database reads, mostly random | Continuous, by user processes | User process initiates disk I/O to read data | Database reads are performed by daemons serving web pages, SQL queries, or direct user processes. |
| Database writes, ordered but non-contiguous | Approx. every 80 seconds, or when pending updates reach a threshold percentage of the database cache, whichever comes first | Database write daemons (8 processes) | Database writes are performed by a set of database system processes known as write daemons. User processes update the database cache, and the trigger (time or database cache percent full) commits the updates to disk using the write daemons. Typically expect anywhere from a few MB to several GB that must be written during the write cycle, depending on update rates. |
| WIJ writes, sequential | Approx. every 80 seconds, or when pending updates reach a threshold percentage of the database cache, whichever comes first | Database master write daemon (1 process) | The WIJ is used to protect physical database file integrity from system failure during a database write cycle. Writes are approximately 256 KB each in size. |
| Journal writes, sequential | Every 64 KB of journal data or 2 seconds, or a sync requested by ECP, Ensemble, or the application | Database journal daemon (1 process) | Journal writes are sequential and variable in size from 4 KB to 4 MB. There can be as few as a few dozen writes per second, up to several thousand per second for very large deployments using ECP and separate application servers. |
Bottlenecks in storage are one of the most common problems affecting database system performance. A common error is sizing storage for data capacity only, rather than allocating a high enough number of discrete disks to support expected Input/Output Operations Per Second (IOPS).
| I/O Type | Average Response Time | Maximum Response Time | Notes |
|---|---|---|---|
| Database block size random read (non-cached) | <=6 ms | <=15 ms | Database blocks are a fixed 8 KB, 16 KB, 32 KB, or 64 KB; most reads that go to disk will not be cached, because of the large database cache on the host. |
| Database block size random write (cached) | <=1 ms | <2 ms | All database file writes are expected to be cached by the storage controller cache memory. |
| 4 KB to 4 MB journal write (without ECP) | <=2 ms | <=5 ms | Journal writes are sequential and variable in size from 4 KB to 4 MB. Write volume is relatively low when no ECP application servers are used. |
| 4 KB to 4 MB journal write (with ECP) | <=1 ms | <=2 ms | Journal synchronization requests generated by ECP impose a stringent response time requirement to maintain scalability. A synchronization request can trigger writes to the last block in the journal to ensure data durability. |
Please note that these figures are provided as guidelines, and that any given application may have higher or lower tolerances and thresholds for ideal performance. These figures and I/O profiles are to be used as a starting point for your discussions with your storage vendor.
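As a starting point, the guideline figures can be encoded in a small helper like the following (a hypothetical sketch; the thresholds are copied from the table above, but the code itself is not an InterSystems tool):

```python
# Guideline thresholds from the table above: (average_ms, max_ms).
# Note: the table gives the cached-write maximum as strictly <2 ms;
# this sketch treats all limits as inclusive for simplicity.
GUIDELINES_MS = {
    "db_random_read": (6, 15),
    "db_random_write": (1, 2),
    "journal_write_no_ecp": (2, 5),
    "journal_write_ecp": (1, 2),
}

def within_guideline(io_type, avg_ms, max_ms):
    """Return True if measured latencies meet the guideline for io_type."""
    avg_limit, max_limit = GUIDELINES_MS[io_type]
    return avg_ms <= avg_limit and max_ms <= max_limit

print(within_guideline("db_random_read", 4.2, 11.0))
print(within_guideline("journal_write_ecp", 1.5, 2.0))
```

A helper like this can anchor the conversation with your storage vendor, with the thresholds adjusted to your application’s own tolerances.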
Preparing for Caché Security
The material in this section is intended for those using Caché security features. For an overview of those features, especially the authentication and authorization options, review the “Introduction” to the Caché Security Administration Guide. This material can help you select the security level for your site, which determines the required tasks to prepare the security environment before installing Caché.
This section covers the following topics:
- Preparing the Security Environment for Kerberos — If you are not using the Kerberos authentication method in your environment, you can bypass this section.
- Initial Caché Security Settings — This section describes the characteristics of the different default security settings. It is particularly useful if you are choosing to use Normal or Locked Down Caché security.
If your security environment is more complex than those this document describes, contact the InterSystems Worldwide Response Center (WRC) for guidance in setting up such an environment.
Preparing the Security Environment for Kerberos
These sections describe the installation preparation for three types of environments:

- Windows-only Environment

  This configuration uses a Windows domain controller for KDC functionality, with Caché servers and clients on Windows machines. A domain administrator creates domain accounts for running the Caché services on Caché servers.

  See the Creating Service Accounts on a Windows Domain Controller for Windows Caché Servers section for the requirements for using Windows Caché servers. Depending on the applications in use on your system, you may also need to perform the actions described in the Configuring Windows Kerberos Clients section.

- Mixed Environment Using a Windows Domain Controller

  This configuration uses a Windows domain controller with Caché servers and clients on a mix of Windows and non-Windows machines. See the following sections for the requirements for using both Windows and non-Windows Caché servers:

  - Creating Service Accounts on a Windows Domain Controller for Windows Caché Servers
  - Creating Service Accounts on a Windows Domain Controller for Non-Windows Caché Servers

  Depending on the applications in use on your system, you may also need to perform the actions described in the Configuring Windows Kerberos Clients section.

- Non-Windows Environment

  This configuration uses a UNIX® or macOS Kerberos KDC, with Caché servers and clients all on non-Windows machines. See the following two sections for the requirements for using a UNIX® or macOS KDC and Caché servers.
All Caché-supported platforms have versions of Kerberos supplied and supported by the vendor; see the appropriate operating system documentation for details. If you choose to use Kerberos, you must have a Kerberos key distribution center (KDC) or a Windows domain controller available on your network. Microsoft Windows implements the Kerberos authentication protocol by integrating the KDC with other security services running on the domain controller.
This document refers to related, but distinct entities:
- Service account — An entity within an operating system, such as Windows, that represents a software application or service.
- Service principal — A Kerberos entity that represents a software application or service.
Creating Service Accounts on a Windows Domain Controller for Windows Caché Servers
Before installing Caché in a Windows domain, the Windows domain administrator must create a service account for each Caché server instance on a Windows machine using the Windows domain controller.
Account Characteristics
When you create this account on the Windows domain controller, configure it as follows:
- Set the account's Password never expires property.
- Make the account a member of the Administrators group on the Caché server machine.
- Add the account to the Log on as a service policy.
If a domain-wide policy is in effect, you must add this service account to the policy for Caché to function properly.
Names and Naming Conventions
In an environment where clients and servers are exclusively on Windows, there are two choices for naming service principals:
- Follow the standard Kerberos naming conventions. This ensures compatibility with any non-Windows systems in the future.
- Use any unique string.
Each of these choices involves a slightly different process of configuring a connection to a server as described in the following sections.
For a name that follows Kerberos conventions, the procedure is:
- Run the Windows setspn command, specifying the name of the service principal in the form service_principal/fully_qualified_domain_name, where service_principal is typically cache and fully_qualified_domain_name is the machine name along with its domain. For example, a service principal name might be cache/cacheserver.example.com. For detailed information on the setspn tool, see the Setspn Syntax page on the Microsoft TechNet web site.
- In the Caché Server Manager dialog for adding a new preferred server, choose Kerberos. What you specify for the Service Principal Name field should match the principal name specified in setspn.
For detailed information on configuring remote server connections, see the “Connecting to Remote Servers” chapter of the Caché System Administration Guide.
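As a hedged illustration of the setspn step, the commands might look like the following, run on the domain controller (the SPN, host, and account name here are examples, not values from this document):

```shell
setspn -A cache/cacheserver.example.com CacheSvcAccount
setspn -L CacheSvcAccount
```

The -A option registers the SPN on the service account; -L lists the SPNs registered to that account so you can verify the result.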
For a name that uses any unique string, the procedure is:
- Choose a name for the service principal.
- In the Caché Server Manager dialog for adding a new preferred server, choose Kerberos. Specify the selected name for the service principal in the Service Principal Name field.
If you decide not to follow Kerberos conventions, a suggested naming convention for each account representing a Caché server instance is “cacheHOST”: the literal cache followed by the host computer name in uppercase. For example, if you are running a Caché server on a Windows machine called WINSRVR, name the domain account cacheWINSRVR.
For more information on configuring remote server connections, see the “Connecting to Remote Servers” chapter of the Caché System Administration Guide.
Creating Service Accounts on a Windows Domain Controller for Non-Windows Caché Servers
Before you install Caché in a Windows domain, you need to create a service account on the Windows domain controller for each Caché server on a non-Windows machine. Create one service account for each machine, regardless of the number of Caché server instances on that machine.
A suggested naming convention for these accounts is “cacheHOST”: the literal cache followed by the host computer name in uppercase. For example, if you run a Caché server on a non-Windows machine called UNIXSRVR, name the domain account cacheUNIXSRVR. For Caché servers on non-Windows platforms, this is the account that maps to the Kerberos service principal.
When you create this account on the Windows domain controller, Caché requires that you set the Password never expires property for the account.
To set up a non-Windows Caché server in the Windows domain, the server must have a keytab file from the Windows domain. A keytab file contains the service name for the Caché server and its key.
To accomplish this, map the Windows service account (cacheUNIXSRVR, in this example) to a service principal on the Caché server and extract the key from the account using the ktpass command-line tool on the domain controller; this is available as part of the Windows support tools from Microsoft.
The command maps the account just set up to an account on the UNIX®/Linux machine; it also generates a key for the account. The command must specify the following parameters:
| Parameter | Description |
|---|---|
| /princ | The principal name (in the form cache/<fully qualified hostname>@<kerberos realm>). |
| /mapuser | The name of the account created (in the form cache<HOST>). |
| /pass | The password specified during account creation. |
| /crypto | The encryption type to use (use the default unless specified otherwise). |
| /out | The keytab file to generate, which you transfer to the Caché server machine and replace or merge with the existing keytab file. |
The principal name on UNIX®/Linux platforms must take the form shown in the table with the literal cache as the first part.
Once you have generated a key file, move it to a file on the Caché server with the key file characteristics described in the following section.
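A hedged example of such a ktpass invocation, run on the domain controller (the realm, host, account, password, and file name are illustrative only):

```shell
ktpass /out cache.keytab /princ cache/unixsrvr.example.com@EXAMPLE.COM /mapuser cacheUNIXSRVR /pass MyS3cretPw /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL
```

The resulting cache.keytab file is then transferred securely to the Caché server machine.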
Creating Service Principals on a KDC for Non-Windows Caché Servers
In a non-Windows environment, you must create a service principal for each UNIX®/Linux or macOS Caché server that uses a UNIX®/Linux or macOS KDC. The service principal name is of the form cache/<fully qualified hostname>@<kerberos realm>.
Key File Characteristics
Once you have created this principal, extract its key to a key file on the Caché server with the following characteristics:
-
On most versions of UNIX®, the pathname is install-dir/mgr/cache.keytab. On macOS and SUSE Linux, the pathname is /etc/krb5.keytab.
-
It is owned by the user that owns the Caché installation and the group cacheusr.
-
Its permissions are 640.
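The ownership and permissions described above can be set as in this sketch; the installation directory and the owning account (shown here as cacheowner) are placeholders that vary by installation:

```shell
# <install-dir> and the owning account vary by installation (placeholders here)
chown cacheowner:cacheusr <install-dir>/mgr/cache.keytab
chmod 640 <install-dir>/mgr/cache.keytab
```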
Configuring Windows Kerberos Clients
If you are using Windows clients with Kerberos, you may also need to configure them so that they do not prompt the user to enter credentials. This is required for programs that cannot prompt for credentials; otherwise, such programs are unable to connect.
To configure Windows not to prompt for credentials, the procedure is:
-
On the Windows client machine, start the registry editor, regedit.exe.
-
Go to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Kerberos\Parameters key.
-
In that key, set the value of AllowTgtSessionKey to 1.
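Equivalently, the same registry value can be set from an elevated command prompt using the standard Windows reg.exe tool; this is a sketch, not an InterSystems-provided command:

```
reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v AllowTgtSessionKey /t REG_DWORD /d 1 /f
```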
Testing Kerberos KDC Functions
When using Kerberos in a system of only non-Windows servers and clients, it is simplest to use a native UNIX®/Linux KDC rather than a Windows domain controller. Consult the vendor documentation on how to install and configure the KDC; these are usually tasks for your system administrator or system manager.
When installing Kerberos, there are two sets of software to install:
-
The KDC, which goes on the Kerberos server machine.
-
Client software, which goes on all machines hosting Kerberos clients. This software can vary widely by operating system; consult your operating system vendor documentation for what client software exists and how to install it.
After installing the required Kerberos software, you can perform a simple test using the kadmin, kinit, and klist commands to add a user principal to the Kerberos database, obtain a TGT (ticket-granting ticket) for this user, and list the TGT.
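A minimal version of that test might look like the following; the principal name and realm are placeholders, and kadmin prompts interactively for passwords:

```shell
kadmin -q "addprinc testuser@EXAMPLE.COM"   # add a user principal to the Kerberos database
kinit testuser@EXAMPLE.COM                  # obtain a TGT for that principal
klist                                       # list the cached TGT
```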
Once you successfully complete a test to validate that Kerberos is able to provide tickets for registered principals, you are ready to install Caché.
Initial Caché Security Settings
During installation, there is a prompt for one of three sets of initial security settings: Minimal, Normal, and Locked Down. This selection determines the initial authorization configuration settings for Caché services and security, as shown in the following sections:
If you select Normal or Locked Down for your initial security setting, you must provide additional account information to the installation procedure. If you are using Kerberos authentication, you must select Normal or Locked Down mode. See the Configuring User Accounts section for details.
If you are concerned about the visibility of data in memory images (often known as core dumps), see the section “Protecting Sensitive Data in Memory Images” in the “System Management and Security” chapter of the Caché Security Administration Guide.
Initial User Security Settings
The following tables show the user password requirements and settings for predefined users based on which security level you choose.
Security Setting | Minimal | Normal | Locked Down |
---|---|---|---|
Password Pattern | 3.32ANP | 3.32ANP | 8.32ANP |
Inactive Limit | 0 | 90 days | 90 days |
Enable _SYSTEM User | Yes | Yes | No |
Roles assigned to UnknownUser | %All | None | None |
You can maintain both the password pattern and inactive limit values from the System, Security Management, System Security Settings, System-wide Security Parameters page of the System Management Portal. See the System-wide Security Parameters section of the “System Management and Security” chapter of the Caché Security Administration Guide for more information.
After installation, you can view and maintain the user settings at the System, Security Management, Users page of the System Management Portal.
Password Pattern
When Caché is installed, it has a default set of password requirements. For locked-down installations, the initial requirement is that a password be from 8 to 32 characters, and can consist of alphanumeric characters or punctuation; the abbreviation for this is 8.32ANP. Otherwise, the initial requirement is that the password be from 3 to 32 characters, and can consist of alphanumeric characters or punctuation (3.32ANP).
Inactive Limit
This value is the number of days an account can be inactive before it is disabled. For minimal installations, the limit is set to 0 indicating that accounts are not disabled, no matter how long they are inactive. Normal and locked-down installations have the default limit of 90 days.
Enable _SYSTEM User
In versions of Caché prior to 5.1, all installed systems included an SQL System Manager user named _SYSTEM with a password of SYS. This Caché version creates the following predefined users, using the password you provide during the installation: _SYSTEM, Admin, SuperUser, CSPSystem, and the instance owner (the installing user on Windows and the username specified by the installer on other platforms).
For more details on these predefined users, see the Predefined User Accounts section of the “Users” chapter of the Caché Security Administration Guide.
Roles Assigned to UnknownUser
When an unauthenticated user connects, Caché assigns a special name, UnknownUser, to $USERNAME and assigns the roles defined for that user to $ROLES. The UnknownUser is assigned the %All role with a Minimal-security installation; UnknownUser has no roles when choosing a security level other than Minimal.
For more details on the use of $USERNAME and $ROLES, see the “Users” and “Roles” chapters of the Caché Security Administration Guide.
Initial Service Properties
Services are the primary means by which users and computers connect to Caché. For detailed information about the Caché services see the “Services” chapter of the Caché Security Administration Guide.
Service Property | Minimal | Normal | Locked Down |
---|---|---|---|
Use Permission is Public | Yes | Yes | No |
Requires Authentication | No | Yes | Yes |
Enabled Services | Most | Some | Fewest |
If the Use permission on a service resource is Public, any user can employ the service; otherwise, only privileged users can employ the service.
For installations with initial settings of locked down or normal, all services require authentication of some kind (Caché login, operating-system–based, or Kerberos). Otherwise, unauthenticated connections are permitted.
The initial security settings of an installation determine which of certain services are enabled or disabled when Caché first starts. The following table shows these initial settings:
Service | Minimal | Normal | Locked Down |
---|---|---|---|
%Service_Bindings | Enabled | Enabled | Disabled |
%Service_CSP | Enabled | Enabled | Enabled |
%Service_CacheDirect | Enabled | Disabled | Disabled |
%Service_CallIn | Enabled | Disabled | Disabled |
%Service_ComPort | Disabled | Disabled | Disabled |
%Service_Console* | Enabled | Enabled | Enabled |
%Service_ECP | Disabled | Disabled | Disabled |
%Service_MSMActivate | Disabled | Disabled | Disabled |
%Service_Monitor | Disabled | Disabled | Disabled |
%Service_Shadow | Disabled | Disabled | Disabled |
%Service_Telnet* | Disabled | Disabled | Disabled |
%Service_Terminal† | Enabled | Enabled | Enabled |
%Service_WebLink | Disabled | Disabled | Disabled |
* Service exists on Windows servers only
† Service exists on non-Windows servers only
After installation, you can view and maintain these services at the System, Security Management, Services page of the System Management Portal.
Configuring User Accounts
If you select Normal or Locked Down for your initial security setting, you must provide additional information to the installation procedure:
-
User Credentials for Windows server installations only — Choose an existing Windows user account under which to run the Caché service. You can choose the default system account, which runs Caché as the Windows Local System account, or enter a defined Windows user account.
Important: If you are using Kerberos, you must enter a defined account that you have set up to run the Caché service. InterSystems recommends you use a separate account specifically set up for this purpose as described in the Creating Service Principals for Windows Caché Servers section.
If you enter a defined user account, the installation verifies the following:
-
The account exists on the domain.
-
You have supplied the correct password.
-
The account has local administrative privileges on the server machine.
-
-
Caché Users Configuration for Windows installations — The installation creates a Caché account with the %All role for the user that is installing Caché to grant that user access to services necessary to administer Caché.
Owner of the instance for non-Windows installations — Enter a username under which to run Caché. Caché creates an account for this user with the %All role.
Enter and confirm the password for this account. The password must meet the criteria described in the Initial User Security Settings table.
Setup creates the following Caché accounts for you: _SYSTEM, Admin, SuperUser, CSPSystem, and the instance owner (installing user on Windows or specified user on other platforms) using the password you provide.
If you select Minimal for your initial security setting on a Windows installation, but Caché requires network access to shared drives and printers, you must manually change the Windows user account under which to run the Caché service. Choose an existing or create a new account that has local administrative privileges on the server machine.
The instructions in the platform-specific chapters of this book provide details about installing Caché. After reading the Caché Security Administration Guide introduction and following the procedures in this section, you are prepared to provide the pertinent security information to these installation procedures.
Preparing to Install Caché on UNIX®, Linux, and macOS
Read the following sections for information that applies to your platform:
Supported File Systems on UNIX®, Linux, and macOS Platforms
For a complete list of file systems supported on UNIX®/Linux platforms, see “Supported File Systems” in the “Supported Technologies” chapter of the online InterSystems Supported PlatformsOpens in a new tab document for this release.
File System Mount Options on UNIX®, Linux, and macOS Platforms
This section describes the following mount options:
Buffered I/O vs. Direct I/O
In general, most of the supported UNIX®, Linux, and macOS file systems and operating systems offer two distinct I/O options, using either program control, a mount option, or both:
-
Buffered I/O, in which the operating system caches reads and writes, is the default.
-
Direct I/O is an option in which reads and writes bypass the operating system cache. Some platforms further distinguish an optimized form of direct I/O, called concurrent I/O, which, if offered, is preferred.
The use of buffered and direct I/O in Caché varies by platform, file system, and the nature of the files that are stored on the file system, as follows:
-
Journal files
Some platforms have specific recommendations to use direct or concurrent I/O mount options for optimal performance, as documented in “Supported File Systems” in the online InterSystems Supported PlatformsOpens in a new tab document for this release. On other platforms, Caché uses direct I/O automatically for journal files as appropriate and no special consideration is required.
-
Installation files, executables, and system databases
This file system should be mounted to use buffered I/O (the default option, and on some platforms the only option).
-
Databases (CACHE.DAT files)
The use of direct I/O (or concurrent I/O) varies in order to optimize I/O characteristics for database files on each platform, as detailed in the following. In all cases, Caché uses its own database cache, so buffering at the operating system level is not advantageous for database files. You must ensure that sufficient database cache is configured; this is particularly true on platforms on which Caché utilizes direct I/O, since operating system buffering cannot make up for an insufficient database cache.
-
IBM AIX
Caché uses concurrent I/O for database files regardless of whether the cio file system mount option is used.
Note: On AIX, in unusual configurations in which an external command is used to read a database file while Caché is running, the external command may fail because the file is opened for concurrent I/O by Caché. An example is performing an external backup using the cp command instead of a more sophisticated backup or snapshot utility. Mounting the file system with the cio option resolves this by forcing all programs to open files with concurrent I/O.
-
HP HP-UX
Caché uses buffered I/O for database files because HP-UX does not provide program-level control of concurrent I/O. Concurrent I/O is recommended and can be enabled by mounting the OnlineJFS or VxFS filesystems with the cio mount option.
-
Oracle Solaris
Caché uses direct I/O automatically for database files located on UFS filesystems; this applies to NFS as well, although NFS is not a recommended filesystem on Solaris. Direct I/O is not relevant to the ZFS filesystem.
-
Linux
Caché uses buffered I/O for database files. If using the VxFS file system, this can be overridden by mounting the file system for concurrent I/O with the cio mount option.
-
macOS
Caché uses buffered I/O for database files.
-
-
External application files and streams
Applications that use external files typically benefit from those files being located on a buffered file system.
noatime Mount Option
Generally, it is advisable to disable updates to the file access time when this option is available. This can typically be done using the noatime mount option on various file systems.
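For example, a hypothetical /etc/fstab entry using noatime might look like the following; the device, mount point, and file system type are placeholders:

```
/dev/mapper/vg0-cachedb   /cachedb   xfs   defaults,noatime   0 0
```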
Calculating System Parameters for UNIX®, Linux, and macOS
This section explains how you can calculate the best parameters for your system in these sections:
-
Determining Memory and Disk Requirements — calculate memory requirements, swap space, disk requirements, maximum buffers, maximum users, and maximum database size.
-
Configuring UNIX® Kernel Parameters — set values for tunable UNIX® parameters and other platform-specific memory management issues.
-
Platform Configuration Issues — configuration issues for individual UNIX®/Linux platforms.
For optimal Caché performance, you need to calculate proper values for certain Caché system parameters. These values allow you to determine whether you need to adjust certain system level parameters. The values you choose should minimize swapping and paging that require disk accesses, and thus improve system performance.
Review this section carefully and calculate the proper values for both your operating system and Caché before proceeding. Use the tables provided here to record the current and calculated values for your system level parameters. You can then refer to these tables when you install Caché. After your system is running, you may need to adjust these values to gain optimal performance.
If you are not already familiar with the memory organization at your operating system level, consult the appropriate system documentation.
Determining Memory and Disk Requirements
This section outlines the basic memory and disk requirements for most systems. Because these requirements vary by platform, consult your platform documentation for additional information. The specific requirements include the following:
See the section Managing Caché Memory for information on the two primary ways that you can manage memory in Caché.
Calculating Memory Requirements
Use the breakdown of memory usage shown in the following table to calculate the memory your system needs for Caché.
Components | Memory Requirements |
---|---|
Operating system | 1800 KB (operating system dependent) |
Caché | 842 KB |
Global database cache | 8 KB per buffer |
Routine cache | 32 KB per routine buffer |
User overhead | 1024 KB per process |
Network (if present) | 300 KB per port for each network system process (DMNNET and RECEIVE). Caché ports have two DMNNET system processes per port. In addition, there is a network shared memory requirement, which depends on the number of ports and the number of remote hosts configured. For a basic system, this requirement is about 304 KB. |
By default, the system automatically allocates shared memory, including routine buffers and global buffers, to a total of one-eighth of the system-available shared memory space. If you plan to run large applications or support large numbers of users, tune the system according to the following formula:
(number of routine buffers * 32 KB) + (number of global buffers * block size) + 4 MB = shared memory needed
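The formula above can be sketched in shell arithmetic; the figures below are hypothetical sizing values for illustration, not recommended defaults:

```shell
# Hypothetical values: 800 routine buffers, 256,000 global buffers of 8 KB each
ROUTINE_BUFFERS=800
GLOBAL_BUFFERS=256000
BLOCK_SIZE_KB=8
# 32 KB per routine buffer + block size per global buffer + 4 MB overhead
SHARED_KB=$(( ROUTINE_BUFFERS * 32 + GLOBAL_BUFFERS * BLOCK_SIZE_KB + 4 * 1024 ))
echo "Shared memory needed: $(( SHARED_KB / 1024 )) MB"
```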
For applications where load growth is reflected in the number of simultaneous direct Caché sessions, the memory demand to accommodate the processes increases as the computing power increases. For example, a system that is upgraded from 4 to 8 cores would be capable of supporting a much larger number of sessions (that is, processes). Since each process consumes memory, it might be necessary to increase physical memory.
The amount of memory per process may vary depending on the application and can be larger than the default value recommended in the UNIX® Memory Requirements table.
For configurations dedicated to servers with a limited number of processes (for example, ECP Data Server or Ensemble), an increase in the load does not necessarily involve a greater number of processes. Therefore, a larger load on a more powerful system may not require more memory for processes.
The default memory page size on Linux systems is 4 KB. Most current Linux distributions include an option for Huge Pages, that is, a memory page size of 2 MB or 1 GB depending on system configuration. Use of Huge Pages saves memory by saving space in page tables. When Huge Pages are configured, the system automatically uses them in memory allocation. InterSystems recommends the use of Huge Pages on systems hosting Caché under most circumstances.
With the 2.6.38 kernel, some Linux distributions have introduced Transparent Huge Pages (THP) to automate the creation, management, and use of Huge Pages. However, THP does not handle the shared memory segments that make up the majority of Caché’s allocated memory, and it can cause memory allocation delays at runtime that may affect performance, especially for applications with a high rate of job or process creation. For these reasons, InterSystems recommends that THP be disabled on all systems hosting Caché. For more detailed information on this topic, see Linux Transparent Huge Pages and the impact to CachéOpens in a new tab on InterSystems Developer Community.
To configure Huge Pages on Linux, do the following:
-
Check the status.
/proc/meminfo contains Huge Pages information. By default, no Huge Pages are allocated. Default Huge Page size is 2 MB. For example:
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 KB
-
Change the number of Huge Pages.
You can change the system parameter directly. For example, to allocate 2056 Huge Pages, execute:
# echo 2056 > /proc/sys/vm/nr_hugepages
Note: Alternatively, you can use sysctl(8) to change it:
# sysctl -w vm.nr_hugepages=2056
Huge pages must be allocated contiguously, which may require a reboot. Therefore, to guarantee the allocation, as well as to make the change permanent, do the following:
-
Enter a line in /etc/sysctl.conf file:
echo "vm.nr_hugepages=2056" >> /etc/sysctl.conf
-
Reboot the system.
-
Verify meminfo after reboot; for example:
[root woodcrest grub]# tail -4 /proc/meminfo
HugePages_Total:  2056
HugePages_Free:   2056
HugePages_Rsvd:      0
Hugepagesize:     2048 KB
-
-
Verify the use of Huge Pages by Caché.
When Caché is started, it reports how much shared memory was allocated; for example, a message similar to the following is displayed (and included in the cconsole.log file):
Allocated 3580MB shared memory: 3000MB global buffers, 226MB routine buffers
The amount of memory available in Huge Pages should be greater than the total amount of shared memory to be allocated; if it is not greater, Huge Pages are not used.
Note: To prevent Caché from starting if Huge Pages are configured but cannot be allocated, set the memlock parameter to 7 (see memlock in the Caché Parameter File Reference).
Huge Pages are allocated from physical memory. Only applications and processes using Huge Pages can access this memory.
Physical memory not allocated for Huge Pages is the only memory available to all other applications and processes.
It is not advisable to specify HugePages_Total much higher than the shared memory amount because the unused memory will not be available to other components.
If Caché fails to allocate Huge Pages on start-up and switches to standard pages, Caché will be allocating shared memory from the same memory pool as all other jobs.
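As a rough sizing sketch with hypothetical figures: to cover the shared memory reported at Caché startup with 2 MB huge pages, round the shared memory size up to a whole number of pages:

```shell
# Hypothetical figures: shared memory from the Caché startup message,
# huge page size from the Hugepagesize line of /proc/meminfo (2048 KB = 2 MB)
SHARED_MB=3580
HUGEPAGE_MB=2
# round up so the huge page pool covers the full shared memory segment
PAGES=$(( (SHARED_MB + HUGEPAGE_MB - 1) / HUGEPAGE_MB ))
echo "vm.nr_hugepages should be at least $PAGES"
```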
When Caché is configured to lock the shared memory segment in memory to prevent paging, Huge Pages can provide the required increase in the maximum size that may be locked into memory, as described in the Locked-in Memory section of the Red Hat Linux Platform Notes in this chapter.
AIX® supports multiple page sizes: 4 KB, 64 KB, 16 MB, and 16 GB. Use of 4 KB and 64 KB pages is transparent to Caché. In order for Caché to use 16 MB large pages, you must configure them within AIX®. AIX® does not automatically change the number of configured large or huge pages based on demand. Currently, Caché does not use 16 GB huge pages.
Large pages should be configured only in high-performance environments because memory allocated to large pages can be used only for large pages.
To allocate large pages, users must have the CAP_BYPASS_RAC_VMM and CAP_PROPAGATE capabilities or have root authority unless memlock=64.
By default, when large pages are configured, the system automatically uses them in memory allocation. If shared memory cannot be allocated in large pages then it is allocated in standard (small) pages. For finer grain control over large pages, see memlock in the Caché Parameter File Reference.
Configure large pages using the vmo command as follows:
vmo -r -o lgpg_regions=<LargePages> -o lgpg_size=<LargePageSize>
where <LargePages> specifies the number of large pages to reserve, and <LargePageSize> specifies the size, in bytes, of the hardware-supported large pages.
On systems that support dynamic Logical PARtitioning (LPAR), you can omit the -r option to dynamically configure large pages without a system reboot.
For example, the following command configures 1 GB of large pages:
# vmo -r -o lgpg_regions=64 -o lgpg_size=16777216
Once you have configured large pages, run the bosboot command to save the configuration in the boot image. After the system comes up, enable it for pinned memory using the following vmo command:
vmo -o v_pinshm=1
However, if memlock=64, vmo -o v_pinshm=1 is not required. For more information on memlock, see memlock in the Caché Parameter File Reference.
Calculating Swap Space
The amount of swap space available on your system should never be less than the amount of real memory plus 256 KB.
With this minimum in mind, InterSystems recommends the following value as the minimum amount of swap space needed for Caché:
((# of processes + 4)† * (1024 KB)‡) + total global buffer space + total routine buffer space = minimum swap space
† Add 4 to the # of processes for the Caché Control Process, the Write daemon, the Garbage Collector, and the Journal daemon. Also add 1 for each slave Write daemon. The # of processes must include all user and jobbed processes which might run concurrently. If you are running networking, add 1 for the RECEIVE system process plus the number of DMNNET daemons you have running (2 per port).
‡ The 1024 KB number is approximate. It is based on the current size of the Caché executable and grows with the partition size you allocate to each Caché process. On most systems, provide only as much swap space as necessary. However, some systems require you to provide swap space for the worst case. Under these conditions, you need to increase this number to as high as 1.5 MB, depending on the partition size you specify.
Be sure to confirm that your UNIX® system permits the amount of swap space you require. For specific information about swap space on your system, consult your UNIX® operating system manual.
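The minimum swap space formula can be sketched in shell arithmetic; the figures below are hypothetical, and the footnoted additions for slave write daemons and networking processes are omitted for simplicity:

```shell
# Hypothetical values: 100 user/jobbed processes, 2,048,000 KB of global
# buffers, 25,600 KB of routine buffers; +4 covers the control process,
# write daemon, garbage collector, and journal daemon
PROCESSES=100
GLOBAL_BUF_KB=2048000
ROUTINE_BUF_KB=25600
SWAP_KB=$(( (PROCESSES + 4) * 1024 + GLOBAL_BUF_KB + ROUTINE_BUF_KB ))
echo "Minimum swap space: $(( SWAP_KB / 1024 )) MB"
```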
To calculate swap space for the Solaris platform:
swap -l
Example:
> swap -l
swapfile dev swaplo blocks free
/dev/dsk/c0t2d0s0 136,0 16 526304 526304
/dev/dsk/c0t2d0s1 136,1 16 2101184 2101184
To display swap space for AIX®:
lsps -a
Page Space  Physical Volume  Volume Group  Size    %Used  Active  Auto  Type
hd6         hdisk2           rootvg        512 MB  72     yes     yes   lv
To display swap space for HP-UX:
swapinfo(3M)
# /usr/sbin/swapinfo
             KB      KB      KB   PCT  START/      KB
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE PRI  NAME
dev      524288  138260  386028   26%       0       -   1  /dev/vg00/lvol2
reserve       -   78472  -78472
memory   195132  191668    3464   98%
Calculating Disk Requirements
In addition to the swap space you just calculated, you need disk space for the following items:
-
67 MB for Caché.
-
3 MB for the Caché Server Pages (CSP).
-
3.5 MB for Caché ODBC support.
-
2.5 MB for the Caché manager sources.
-
6.6 MB for the Caché engine link libraries.
-
Space for your Caché application database.
-
Approximately 12.5% of the buffer pool size for the initial size of the write image journal file. If your disk does not have enough space for the write image journal file, Caché displays a message at startup indicating that the system did not start.
-
Desired space for journal files.
Although you do not need to remove any installation files after completing the installation procedure, you can do so if you are short on disk space. The installation program tells you how much space can be saved, and asks if you want to delete the installation files.
Determining Number of Global Buffers
Caché supports the following maximum values for the number of global buffers:
-
For 32-bit platforms, any 8-KB buffers that are:
-
Less than 1 GB for HP-UX
-
Less than 2 GB for other 32-bit platforms
The 2-GB value is the total address space the operating system allocates for the process data, which includes not only shared memory, but other Caché and operating system data as well. Therefore, it represents an upper limit that is not achievable in practice.
-
-
For 64-bit platforms:
The number of global buffers is limited only by the operating system and the available memory.
Set your values to less than the maximum number of buffers.
For more information, see globals in the “config” section of the Caché Parameter File Reference and Memory and Startup Settings in the “Configuring Caché” chapter of the Caché System Administration Guide.
Determining Number of Routine Buffers
Caché supports the following maximum value for the number of routine buffers:
65,535
Set your values to less than this maximum number of buffers.
For more information, see routines in the “config” section of the Caché Parameter File Reference and Memory and Startup Settings in the “Configuring Caché” chapter of the Caché System Administration Guide.
Determining Maximum Number of Users
The maximum number of users allowed by Caché is the lowest of the following values:
-
License limit
-
# of semaphores - 4
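Taking the lower of the two values above can be sketched as follows; the license limit and semaphore count are hypothetical figures:

```shell
# Hypothetical limits: a 200-user license and 256 semaphores
LICENSE_LIMIT=200
SEMAPHORES=256
SEM_LIMIT=$(( SEMAPHORES - 4 ))
# the effective maximum is the smaller of the two limits
MAX_USERS=$(( LICENSE_LIMIT < SEM_LIMIT ? LICENSE_LIMIT : SEM_LIMIT ))
echo "Maximum users: $MAX_USERS"
```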
For more information, see Determining License Capacity and Usage in the “Managing Caché Licensing” chapter of the Caché System Administration Guide.
Determining Maximum Database Size
The ulimit parameter in UNIX® determines the maximum file size available to a process. For the Caché Manager group, the value of ulimit should either be unlimited or as large as the largest database you may have.
For more information, see Configuring Databases in the “Configuring Caché” chapter of the Caché System Administration Guide.
Configuring UNIX® Kernel Parameters
The following sections describe issues related to tuning and performance on various UNIX® platforms:
Setting Values for Tunable UNIX® Parameters
Caché uses a configurable number of semaphores, in sets whose size you define. The parameters SEMMNI, SEMMNS, and SEMMSL reflect the number of semaphores per set and the total number of semaphores Caché uses. The UNIX®/Linux parameters that govern shared memory allocation are SHMMAX, SHMMNI, SHMSEG, and SHMALL. Caché uses shared memory and allocates one segment of shared memory; the size of this segment depends on the area set aside for global buffers and routine buffers. It uses the following formula to determine the segment's minimum size:
space required for routine buffers + space required for global buffers + 4 MB = shared memory segment size
If you are distributing your data across multiple computers, Caché allocates a second segment; by default, no memory is allocated for the second segment. (If you plan to use distributed data, contact your vendor or InterSystems support for configuration guidelines.) You can alter NBUF and NHBUF according to other system requirements; because Caché does all its own disk buffering, keep NBUF and NHBUF small.
The following table lists the most common names of the UNIX® parameters that you may need to change, the minimum value InterSystems recommends for each parameter, and a brief description of each. Verify that your parameter values are set to at least the minimum value. Certain parameters may not be implemented on all platforms or may be referred to differently; refer to platform-specific tuning notes for more information.
Kernel Parameter | Recommended Minimum Value | Definition |
---|---|---|
CDLIMIT | Number of bytes in largest virtual volume | Maximum size of a file. |
MSGMAX | 2 KB | Maximum message size, in bytes. |
MSGMNI | Number of Caché instances x 3; each Caché instance uses three message queues | Maximum number of uniquely identifiable message queues that may exist simultaneously. |
NOFILES | 35 | Number of open files per process. |
SEMMNI | Product of SEMMNI and SEMMSL must be greater than the # of user processes + 4 | Number of semaphore identifiers in the kernel; this is the number of unique semaphore sets that can be active at any one time. |
SEMMNS | 128, or the number of processes expected to run; if the process table might expand, use a larger number to provide for expansion | Total number of semaphores in the system. User processes include jobbed processes and all other semaphores required by other software. |
SEMMSL | See SEMMNI | Maximum number of semaphores per identifier list. |
SHMALL | 60 KB, or 1000 + total global buffer space + total routine buffer space * | Maximum total shared memory system-wide. Units should be in KB. 1000 represents the MCOMMON shared region. |
SHMMNI | 3 | Maximum number of shared memory identifiers system-wide. |
SHMSEG | 3 | Number of attached shared memory segments per process. |
SHMMAX | 60 KB, or 1000 + total global buffer space + total routine buffer space | Maximum shared memory segment size in KB. |
* This is the minimum value for SHMALL required for Caché UNIX®. You must also take into account any other applications that use shared memory. If you are unsure of other shared memory use, calculate SHMALL as SHMSEG multiplied by SHMMAX, in pages; this larger value suffices in all cases.
Enough swap space must be created to support the memory allocated, unless the operating system documentation explicitly states otherwise. On certain operating systems (Solaris, for example) Caché creates locked shared memory segments, which are not pageable but still may need swap space.
Adjusting Maximum File Size
The hard limit for the maximum file size (RLIMIT_FSIZE) on any system running Caché must be unlimited. Set the value to unlimited on the operating system before installing. Make sure that the limit is set to unlimited for both the root user and the user who will run Caché. Caché also sets the process soft limit to RLIMIT_FSIZE in its daemons to prevent I/O errors.
Caché will not install or start up if RLIMIT_FSIZE is not set to unlimited.
See the operating system documentation for your platform for instructions on how to set the system hard limit for the maximum file size, RLIMIT_FSIZE.
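On most UNIX®/Linux shells you can verify the current limits before installing; a minimal sketch:

```shell
# Display the hard (-H) and soft (-S) limits for maximum file size (RLIMIT_FSIZE);
# both should report "unlimited" before installing Caché.
ulimit -Hf
ulimit -Sf
```

The mechanism for raising the hard limit varies by platform (for example, /etc/security/limits.conf on Linux); consult your operating system documentation.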
Platform Configuration Issues
The following sections contain configuration issues for individual UNIX®/Linux platforms. For more information, consult the system documentation for your platform.
HP-UX Platform Notes
This topic includes information on the following adjustments:
Use the HP System V IPC Shared-Memory Subsystem to update parameters. See the HP System V Inter-Process Communication MechanismsOpens in a new tab online documentation page for additional information. To change a value, perform the following steps:
-
Enter the /usr/sbin/sam command to start the System Administration Manager (SAM) program.
-
Double-click the Kernel Configuration icon.
-
Double-click the Configurable Parameters icon.
-
Double-click the parameter you want to change and enter the new value in the Formula/Value field.
-
Click OK.
-
Repeat these steps for all of the kernel configuration parameters that you want to change.
-
When you are finished setting all of the kernel configuration parameters, select Process New Kernel from the Action menu.
The HP-UX operating system automatically reboots after you change the values for the kernel configuration parameters.
HP-UX Release 11i Parameters
HP-UX release 11i does not implement the CDLIMIT and NOFILES parameters. However, you can tune the values of the ulimit and maxfiles parameters instead.
If you tune maxfiles and maxfiles_lim, ensure that the values you set reflect the actual needs of your Caché system. Caché closes all possible open file descriptors when starting a new process via the Job command; setting a high value for these parameters may cause unnecessary close operations which can impact job start performance.
HP-UX Key Kernel Tunable Parameters
HP recommends that only parameters that are different from the OS default value be included in the /stand/system file. HP further recommends that changes to the /stand/system file be made only using the kctune command and that parameters not explicitly mentioned here be explicitly set to default values, as follows:
-
Revert to default:
kctune [parameter]=
-
Set parameter to expression:
kctune [parameter]="[expression]"
When the version of the release to which a parameter applies is not explicitly mentioned, the recommendation is valid for all versions of HP-UX 11i v2 and 11i v3.
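For example, applying one value from the table that follows and reverting another to its default might look like this (a sketch only; kctune must be run as root on HP-UX):

```shell
# Set nproc to the recommended value (kctune records the change in /stand/system):
kctune nproc=30000
# Revert dbc_max_pct to the OS default:
kctune dbc_max_pct=
```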
Parameter | Value | Notes |
---|---|---|
dbc_max_pct 1 | 1–2GB | Percentage of physical memory |
dbc_min_pct 1 | <dbc_max_pct> | Same as dbc_max_pct |
fcache_fb_policy 2 | 1 | Flush behind policy |
hires_timeout_enable | 1 | Requires hi-res timer patches |
lcpu_attr | 0 | Disable hyperthreading |
nkthread | 30100 | Threads allowed to run simultaneously (nproc+100) |
nproc | 30000 | 30000 = maximum for 11i v2; 60000 = maximum for 11i v3 |
maxuprc | 28000 | Maximum procs/user |
nclist | 8292 | Number of cblocks for pty and tty – min 8292 |
nfile 3 | 5600066 | System-wide open files |
nstrpty | 600 | Maximum streams-based ptys |
nstrtel | 600 | Maximum telnet device files |
o_sync_is_o_dsync 3 | 1 | Enables translation of O_SYNC to O_DSYNC |
process_id_max 2 | 31000 | Maximum PID number |
scsi_max_qdepth 3 | Set appropriate value for storage subsystem | |
semmsl | 128 | Semaphores/ID |
semmni | 512 | System-wide semaphore IDs; allow 50 IDs for non-Caché use |
semmns (semmsl * semmni) | 65536 | System-wide semaphores; must exceed the number of Caché processes |
shmmax | ½ phys mem | Shared memory |
swchunk | 16384 | Swap chunk size |
vps_ceiling | 512 | Maximum system-selectable page size |
The following notes apply to the “HP-UX Key Kernel Tunable Parameters” table:
1 This parameter is for 11i v2 only.
2 This parameter is for 11i v3 only.
3 This parameter is obsolete in 11i v3.
HP-UX Network Parameters
The following parameters should be inserted in /etc/rc.config.d/nddconf to support gigabit ethernet:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_recv_hiwater_def
NDD_VALUE[0]=262144
TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_xmit_hiwater_def
NDD_VALUE[1]=262144
TRANSPORT_NAME[2]=tcp
NDD_NAME[2]=tcp_recv_hiwater_lfp
NDD_VALUE[2]=262144
TRANSPORT_NAME[3]=tcp
NDD_NAME[3]=tcp_xmit_hiwater_lfp
NDD_VALUE[3]=262144
TRANSPORT_NAME[4]=sockets
NDD_NAME[4]=socket_udp_rcvbuf_default
NDD_VALUE[4]=262144
TRANSPORT_NAME[5]=sockets
NDD_NAME[5]=socket_udp_sndbuf_default
NDD_VALUE[5]=65535
Hyper-threading (HT) Technology
InterSystems recommends enabling hyper-threading on all Poulson (Itanium 9500 series)-based or Intel Xeon-based processors. InterSystems recommends disabling hyper-threading for older Itanium processors. Please consult the InterSystems Worldwide Response Center (WRC)Opens in a new tab if you have questions about your specific server and platform.
AIX® Platform Notes
The default settings of several AIX® parameters can adversely affect performance. The settings and recommendations are detailed for the following:
I/O Pacing Parameters
AIX® implements an I/O pacing algorithm that may hinder Caché write daemons. In AIX® 5.2 and AIX® 5.3, I/O pacing is automatically enabled when using HACMP clustering; beginning in AIX® 6.1, however, I/O pacing is enabled on all systems and the default high-water mark is set higher than in earlier releases.
If write daemons are slowing or stalling, you may have to adjust the high-water mark; for information, see the “Using Disk-I/O Pacing” section of the AIX® Performance Management Guide at the following IBM web page:http://publib.boulder.ibm.com/infocenter/systems/scope/aix/topic/com.ibm.aix.prftungd/doc/prftungd/disk_io_pacing.htmOpens in a new tab.
Beginning in AIX® 6.1, you should not have to make any high-water mark adjustments.
If you have questions about the impact to your system, however, contact the InterSystems Worldwide Response Center (WRC)Opens in a new tab or your AIX® supplier before making any changes. These recommendations are independent of Caché versions and apply to both JFS and Enhanced JFS (JFS2) file systems.
File System Mount Option
Although different mount options may improve performance for some workloads, InterSystems recommends the concurrent I/O (cio) mount option for file systems that contain only CACHE.DAT files.
Non-Caché workloads that benefit from file system caching (for example, operating system-level backups and/or file copies) are slowed by the cio mount option.
For JFS2 file systems that contain only journal files, cio is strongly recommended. For information, see UNIX® File System Recommendations in the “Journaling” chapter of the Caché Data Integrity Guide.
To improve recovery speed using the CACHE.WIJ file after a hard shutdown or system crash, InterSystems recommends a mount option that includes file system buffering (for example, rw) for the file system that contains the CACHE.WIJ file.
For information about mount options, see the AIX® Commands Reference at the following IBM web page: http://publib.boulder.ibm.com/infocenter/systems/scope/aix/topic/com.ibm.aix.cmds/doc/aixcmds3/mount.htmOpens in a new tab.
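For example, mounting a JFS2 file system that holds only CACHE.DAT files with concurrent I/O might look like the following sketch (the logical volume and mount point names are hypothetical):

```shell
# Mount a database-only JFS2 file system with concurrent I/O:
mount -o cio /dev/cachedb_lv /cachedb

# Use a buffered mount for the file system holding CACHE.WIJ:
mount -o rw /dev/cachewij_lv /cachewij
```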
Memory Management Parameters
The number of file systems and the amount of activity on them can limit the number of memory structures available to JFS or JFS2, and delay I/O operations waiting for those memory structures.
To monitor these metrics, issue a vmstat -vs command, wait two minutes, and issue another vmstat -vs command. The output looks similar to the following:
# vmstat -vs
1310720 memory pages
1217707 lruable pages
144217 free pages
1 memory pools
106158 pinned pages
80.0 maxpin percentage
20.0 minperm percentage
80.0 maxperm percentage
62.8 numperm percentage
764830 file pages
0.0 compressed percentage
0 compressed pages
32.1 numclient percentage
80.0 maxclient percentage
392036 client pages
0 remote pageouts scheduled
0 pending disk I/Os blocked with no pbuf
5060 paging space I/Os blocked with no psbuf
5512714 filesystem I/Os blocked with no fsbuf
194775 client filesystem I/Os blocked with no fsbuf
0 external pager filesystem I/Os blocked with no fsbuf
If you see an increase in the following parameters, increase the values for better Caché performance:
-
pending disk I/Os blocked with no pbuf
-
paging space I/Os blocked with no psbuf
-
filesystem I/Os blocked with no fsbuf
-
client filesystem I/Os blocked with no fsbuf
-
external pager filesystem I/Os blocked with no fsbuf
When increasing these parameters from the default values:
-
Increase the current value by 50%.
-
Check the vmstat output.
-
Run vmstat twice, two minutes apart.
-
If the field is still increasing, increase again by the same amount; continue this step until the field stops increasing between vmstat reports.
Change both the current and the reboot values, and check the vmstat output regularly because I/O patterns may change over time (hours, days, or weeks).
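The two-minute comparison described above can be scripted; a minimal sketch using the same vmstat -vs output:

```shell
# Take two snapshots two minutes apart and show which
# blocked-I/O counters are still increasing:
vmstat -vs > /tmp/vmstat1.out
sleep 120
vmstat -vs > /tmp/vmstat2.out
diff /tmp/vmstat1.out /tmp/vmstat2.out | grep -i "blocked"
```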
See the following IBM web pages for more detailed information:
-
For a complete description of each of the fields reported by vmstat, see the vmstat Command page of AIX® Commands Reference, Volume 6, v - z at:
-
For instructions on how to increase these parameters, see the VMM page replacement tuning section of the AIX® Performance Management Guide at:
-
For a complete description of managing I/O tunable parameters, see the ioo Command page of AIX® Commands Reference, Volume 3, i - m at:
AIX® Tunable Parameters
None of the following listed parameters requires tuning because each is dynamically adjusted as needed by the kernel. See the appropriate AIX® operating system documentationOpens in a new tab for more information.
The following table lists the tunable parameters for the IBM pSeries AIX® 5.2 operating system.
Parameter | Purpose | Dynamic Values |
---|---|---|
msgmax | Specifies maximum message size. | Maximum value of 4 MB |
msgmnb | Specifies maximum number of bytes on queue. | Maximum value of 4 MB |
msgmni | Specifies maximum number of message queue IDs. | Maximum value of 4096 |
msgmnm | Specifies maximum number of messages per queue. | Maximum value of 524288 |
semaem | Specifies maximum value for adjustment on exit. | Maximum value of 16384 |
semmni | Specifies maximum number of semaphore IDs. | Maximum value of 4096 |
semmsl | Specifies maximum number of semaphores per ID. | Maximum value of 65535 |
semopm | Specifies maximum number of operations per semop() call. | Maximum value of 1024 |
semume | Specifies maximum number of undo entries per process. | Maximum value of 1024 |
semvmx | Specifies maximum value of a semaphore. | Maximum value of 32767 |
shmmax | Specifies maximum shared memory segment size. | Maximum value of 256 MB for 32-bit processes and 0x80000000u for 64-bit |
shmmin | Specifies minimum shared-memory-segment size. | Minimum value of 1 |
shmmni | Specifies maximum number of shared memory IDs. | Maximum value of 4096 |
maxuproc, which specifies the maximum number of processes that can be started by a single nonroot user, is a tunable parameter that can be adjusted as described in this subsection.
If this parameter is set too low, various components of the operating system can fail as more and more users attempt to start processes; these failures include loss of CSP pages, background tasks failing, and so on. Therefore, set the maxuproc parameter higher than the maximum number of processes that might be started by a nonroot user (including interactive users, web server processes, and anything else that might start a process).
Do not set the value excessively high, because this limit protects a server from a runaway application that creates new processes unnecessarily; however, setting it too low causes unexplained problems.
InterSystems suggests that you set maxuproc to double your expected maximum process count, which gives a margin of error while still protecting against runaway processes. For example, if your system has 1000 interactive users and often runs 500 background processes, a value of at least 3000 is a good choice.
The maxuproc value can be examined and changed either from the command line or from the smit/smitty administrator utilities, both as root user, as follows:
-
From the command line, view the current setting:
# lsattr -E -l sys0 -a maxuproc
then modify the value:
# chdev -l sys0 -a maxuproc=NNNNNN
where NNNNNN is the new value.
-
From the administrator utility smit (or smitty) choose System Environments > Change / Show Characteristics of Operating System > Maximum number of PROCESSES allowed per user.
If you increase the value of maxuproc, the change is effective immediately. If you decrease the value of maxuproc, the change does not take effect until the next system reboot. In both cases the change persists over system reboots.
Red Hat Linux Platform Notes
This topic includes information on the following adjustments:
Locked-in Memory
On Linux platforms, you can configure Caché to lock the shared memory segment in memory to prevent paging as described in the memlock entry of the Caché Parameter File Reference. If shared memory is allocated in Huge Pages, they are automatically locked in memory and no further action is required. Otherwise, you must increase the maximum size that may be locked into memory. The default value is 32 KB. View the current value using the ulimit command.
For example, to display all current limits:
bash$ ulimit -a
core file size (blocks, -c) unlimited
data seg size ( KBytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 1024
max locked memory (KBytes, -l) 32 <---------- THIS ONE
max memory size (KBytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size ( KBytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 49000
virtual memory ( KBytes, -v) unlimited
file locks (-x) unlimited
To display only max-locked memory, use the -l option:
bash$ ulimit -l
32
If you have privileges, you can alter the value directly using the ulimit command; however, it is better to update the memlock parameter in the /etc/security/limits.conf file. If the memlock limit is too low, Linux reports an ENOMEM ("not enough memory") error, which does not make the cause obvious: the actual memory is allocated; it is the lock that fails.
For more information, see memlock in the Caché Parameter File Reference.
You can achieve the same effect by using Linux Huge Pages for Caché shared memory. See the section “Support for Huge Memory Pages for Linux” in the section Calculating Memory Requirements in this chapter for more information.
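For example, the /etc/security/limits.conf entries might look like the following (the user name cacheusr is a hypothetical instance owner; the value is in KB, or use unlimited):

```shell
# /etc/security/limits.conf — raise the locked-memory limit for the instance owner
cacheusr    soft    memlock    unlimited
cacheusr    hard    memlock    unlimited
```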
Adjusting for Large Number of Concurrent Processes
Make the following adjustments if you are running a system that requires a large number of processes or telnet logins.
-
In the /etc/xinetd.d/telnet file, add the following line:
instances = unlimited
-
In the /etc/xinetd.conf file, add or change the instances setting to:
instances = unlimited
-
After you make these modifications, restart the xinetd services with:
# service xinetd restart
-
The default pty (pseudo terminal connection) limit is 4096. If this is not sufficient, add or change the maximum pty line in the /etc/sysctl.conf file. For example:
kernel.pty.max=10000
Dirty Page Cleanup
On large memory systems (for example, 8GB or larger), when doing numerous flat-file writes (for example, Caché backups or file copies), you can improve performance by adjusting the following parameters, which are located in /proc/sys/vm/:
-
dirty_background_ratio — Maximum percentage of active memory that can be filled with dirty pages before pdflush begins to write them. InterSystems recommends setting this parameter to 5.
-
dirty_ratio — Maximum percentage of total memory that can be filled with dirty pages before processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do more writes. InterSystems recommends setting this parameter to 10.
You can set these variables by adding the following to your /etc/sysctl.conf file:
vm.dirty_background_ratio=5
vm.dirty_ratio=10
These changes force the Linux pdflush daemon to write out dirty pages more often, rather than queuing large amounts of updates that can flood the storage in a single burst.
Oracle Solaris Platform Notes
Depending on the size of the database cache your deployment requires, it may be necessary to increase shared memory kernel parameters. See the Solaris Tunable Parameters Reference ManualOpens in a new tab for specific information on Solaris tunable parameters.
The Solaris 10 release no longer uses the /etc/system mechanism to tune the IPC shared memory parameters. These allocations are now automatic or configured through the resource controls mechanism.
If you try to use /etc/system on Solaris 10, you may receive the following message:
* IPC Shared Memory
*
* The IPC Shared Memory module no longer has system-wide limits.
* Please see the "Solaris Tunable Parameters Reference Manual" for
* information on how the old limits map to resource controls and
* the prctl(1) and getrctl(2) manual pages for information on
* observing the new limits.
See “Chapter 6 Resource Controls (Overview)” of the System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris ZonesOpens in a new tab on the Oracle web site for detailed information on using the rctladm, prctl, and projects commands to set Solaris 10 parameters.
The following subsections summarize several important configuration and tuning guidelines for a reliable, high-performing deployment of Caché on the Solaris ZFS filesystem:
The recommended minimum Solaris version is Solaris 10 10/08, which contains several key patches related to ZFS.
General ZFS Settings
You can adjust general ZFS parameters in the /etc/system file as follows:
-
zfs_arc_max
The ZFS adaptive replacement cache (ARC) tries to use most of a system's available memory to cache file system data. The default is to use all of physical memory except 1GB. As memory pressure increases, the ARC relinquishes memory. This amount needs to be adjusted downwards for optimal Caché performance.
Typically, this amount should be restricted to roughly 10% - 20% of the available RAM. So, if a system had 32GB RAM configured, you would set this to be either 3.2GB (10%) or 6.4GB (20%) by adding the following line to the /etc/system file:
-
3.2GB:
set zfs:zfs_arc_max=3435973837
-
6.4GB:
set zfs:zfs_arc_max=6871947674
-
zfs_immediate_write_sz
The ZFS Intent Log (ZIL) is used during write operations, and is an integral part of the ZFS infrastructure from a data integrity perspective. The ZIL behaves differently for different write sizes — for small writes, the data itself is stored as part of the log record; for large writes, the ZIL does not store a copy of the write, but rather syncs the write to disk and only stores (in the log record) a pointer to the synced data. The value of the large write is defined by the zfs_immediate_write_sz configuration item, and by default is 32KB.
Oracle has indicated that there are cases where large writes can result in data integrity issues when trying to recover from a crash. Specifically, the pointer(s) to the synchronized data stored in the log record may become corrupt, thereby rendering recovery impossible.
In order to limit exposure to recovery issues, Oracle has released a temporary work-around, which can be configured by adding the following line to the /etc/system file:
set zfs:zfs_immediate_write_sz=0x20000
This line characterizes a large write as 128KB (as opposed to the default of 32KB), thereby forcing all writes up to 128KB to be written to the log record.
Updates to the /etc/system file take effect after the next reboot.
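The zfs_arc_max byte values shown above follow directly from the RAM size; a quick arithmetic check (the documented values are rounded up from the fractional result):

```shell
# 10% and 20% of 32 GiB of RAM, in bytes (integer division truncates):
echo $((32 * 1024 * 1024 * 1024 / 10))   # 3435973836
echo $((32 * 1024 * 1024 * 1024 / 5))    # 6871947673
```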
ZFS Pool Configuration and Settings
For detailed information about the relevant commands for creating and administering pools and filesystems, see the Solaris ZFS Administration GuideOpens in a new tab. In addition, you should do the following:
-
General Options
Turn access time updates (atime) off for all configured ZFS pools and file systems (atime=off)
Configure record size as follows:
-
Database pool/file system(s): 8K (recordsize=8K)
Note:If you use databases with large (that is, greater than 8KB) block sizes, update the Database pool/file system to match the block size. For information about large block sizes, see “Considering Large Block Sizes” in the Configuring Databases section of the “Configuring Caché” chapter of the Caché System Administration Guide.
-
Journal pool/file system: 64K (recordsize=64K)
-
Write Image Journal pool/file system: 128K (recordsize=128K)
-
Separate the ZIL
The ZIL can be placed on a separate device (LUN) from the rest of the pool. The default configuration of ZFS is to place the ZIL across the same device(s) (LUNs) as the rest of the pool. Separating the ZIL to its own respective device can result in a significant performance boost, especially for the Caché Journaling.
Although the ZIL can be placed on a separate device at any time, it is best done at pool creation time with the following line:
# zpool create <pool> <pool_devices> log <log_devices>
The following web site details other ZIL specific commands: http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_onOpens in a new tab
Miscellaneous Solaris Settings
If a driver encounters a request larger than the maximum size of physical I/O requests (maxphys), it breaks the request into maxphys-size chunks. This value does not need to be specified on Solaris SPARC implementations, but it should be explicitly configured on Solaris x64 systems.
To configure the maximum size of physical I/O requests, add the following line to the /etc/system file:
set maxphys=1048576
SUSE Linux Platform Notes
This topic includes information on the following adjustments:
Locked-in Memory
On Linux platforms, you can configure Caché to lock the shared memory segment in memory to prevent paging as described in the memlock entry of the Caché Parameter File Reference. If shared memory is allocated in Huge Pages, they are automatically locked in memory and no further action is required. Otherwise, see the Locked-in Memory section of the Red Hat Linux Platform Notes in this appendix.
Special Considerations
The following sections describe particular issues or tasks associated with specific platforms or kinds of installations:
Maximum User Process Recommendations
Ensure that the maximum number of user processes is set high enough to allow all Caché processes for a given user, as well as other default processes, to run on the system.
Journal File System Recommendations
To achieve optimal journal performance and ensure journal data integrity when there is a system crash, InterSystems recommends various file systems and mount options for journal files. For specific platform details see the UNIX® File System Recommendations section of the “Journaling” chapter of the Caché Data Integrity Guide.
HP-UX Considerations
See the HP-UX Platform Notes section of the chapter “Preparing to Install Caché” for detailed configuration information.
Processor
Following are the recommended processor requirements:
-
Configure memory for maximum interleaving.
On most systems, all DIMM slots should be filled with the same size DIMMs. Consult the documentation for your particular platform for precise rules.
-
On multi-cell systems, configure with no cell local memory.
-
Configure swap space equal to at least 1.5*physical memory.
-
Most current firmware.
Software and Patches
Following are the minimum software requirements:
-
HP-UX 11.23 or 11.31
-
MCOE on 11i v2
-
All current supplementary patches (use SWA to check)
-
SecurePath, PowerPath, or native 11i v3 fiber channel multipathing
-
HP-UX Software Assistant (SWA)
-
Glance
-
Disable kcmond (kcalarm -m off)
Storage
LUNs should be configured as follows:
-
Journal LUN — optimized for fast processing of continuous small (~1-4 KB block size) single-stream sequential writes - high IOPS
(RAID1+0, no parity-based RAID)
-
WIJ LUN — optimized for fast processing of large (256-KB block size), single-stream sequential write bursts
(RAID1+0, no parity-based RAID)
-
Database LUN(s) — optimized for random 8-KB reads/writes
(RAID1+0 strongly preferred)
-
Installation and other LUN(s) as required.
SecurePath/PowerPath/etc. should be configured to load balance using the SST (Shortest Service Time) algorithm, if available, or with a preferred path.
In the interests of performance and recoverability, InterSystems recommends placing the primary and alternate journal directories on storage devices that are separate from the devices used by databases and the write image journal (WIJ), as well as separate from each other. For practical reasons, these different devices may be different logical unit numbers (LUNs) on the same storage area network (SAN), but the general rule is the more separation the better, with separate sets of physical drives highly recommended. Be sure to see Journaling Best Practices in the “Journaling” chapter of this book for important information about ensuring journal availability for recovery.
LVM/VxVM
LVM/VxVM striping should not be used. If a filesystem requires the use of multiple LUNs, use concatenation or, if striping is required, use only extent-based striping. If possible, avoid LVM/VxVM mirroring.
File systems (VxFS)
At least 3 separate filesystems are required:
-
write image journal (CACHE.WIJ) and databases
-
primary and alternate journal directories
-
installation and others as required
Software installation directories may be included in the WIJ filesystem. All filesystems should be created with:
mkfs -F vxfs -o bsize=8192,largefiles [special]
Create /etc/vx/tunefstab, including the lines:
[block-device for wij] discovered_direct_iosz=512K,\
write_pref_io=256K,max_diskq=1M,max_buf_data_size=64K
[block-device for DBfs1] max_diskq=16M
[block-device for DBfs2] max_diskq=16M
.
.
.
The file must contain the /dev/... special device files used to mount the filesystems, not the paths to the filesystems themselves.
Mount all filesystems with the mount options "delaylog,largefiles". On VxFSV5, also include the mount option "noqio".
Do NOT use DirectIO (mount options "mincache=direct,convosync=direct") for the WIJ. Although DirectIO (or Concurrent IO on VxFSV5) may be used for database file systems, it can have an effect on write performance, which may cause problems with heavy write loads (for example, during batch updates); for additional information, see File System Mount Options on UNIX®/Linux Platforms in the “General File System and Disk/Storage Subsystem Configuration Recommendations” chapter of this book.
HP-UX pwgrd Daemon
HP-UX distributions run a daemon (pwgrd) that caches password and group information from network queries. InterSystems does not explicitly enable or disable it, but prior to the 9/2008 Release of HP-UX 11iV3, it was enabled by default; beginning with that version, however, it is disabled by default. As a result of pwgrd being disabled, when installing Caché with a network user as the instance owner, users may see the following error: chown: unknown user id integ. To prevent the error from occurring, you should enable pwgrd.
HP-UX Kerberos Client Requirements
For Kerberos to work properly on the HP-UX 11i platform you must have the Kerberos Client (KRB5CLIENT) version 1.3.5. Verify you have the latest upgrade for HP-UX 11i v2 that includes this version. HP-UX 11i v3 includes the proper version of the client.
HP-UX Random Number Generator
Caché requires the HP-UX Strong Random Number Generator component for true entropy for its cryptographic random number generator. HP-UX 11i v2 now includes this component by default.
HP-UX fastbind Requirements
If you relink the cache binary on HP-UX or upgrade HP-UX after installing Caché, run the following command on the cache binary in the install-dir/bin directory:
$ /usr/css/bin/fastbind ./cache
HP-UX chown Requirements
InterSystems recommends that you restrict HP-UX file and directory permissions. To do this:
-
Add the following line to the /etc/privgroup file:
-n CHOWN
-
Once you have done this, perform either of the following two actions:
-
Run setprivgrp -f /etc/privgroup
-
Reboot the system
IBM AIX® Considerations
The default settings of several AIX® parameters can adversely affect performance. For detailed information on the settings and recommendations, see the AIX® Platform Notes section of the chapter “Preparing to Install Caché”.
System Requirements
For information about current system requirements, see the “Supported Technologies” chapter of the online InterSystems Supported PlatformsOpens in a new tab document for this release.
Required C/C++ Runtime Libraries
You must ensure that the required C/C++ runtime is installed on your IBM AIX® system before installing Caché.
Caché for AIX is compiled using the IBM XL C/C++ for AIX 13.1 compiler. If the system on which you are installing Caché does not have the corresponding version of the runtime already installed, you must install these three runtime file sets from runtime package IBM_XL_CPP_RUNTIME_V13.1.0.0_AIX.tar.Z:
-
xlC.aix61.rte 13.1
-
xlC.rte 13.1
-
xlC.msg.en_US.rte 13.1
If these files are not present, Caché installation will not complete.
Full information about and download of this package is available at IBM XL C/C++ Runtime for AIX 13.1Opens in a new tab.
Shared Library Environment Variable for Caché Engine Link Libraries
The Caché Engine link libraries contain a batch file that references any installed C linker.
If you have either the standard UNIX® C libraries or any proprietary C libraries defined in the LIBPATH environment variable, then your environment is ready.
If not, append the paths for the standard UNIX® C libraries to LIBPATH; these paths are /usr/lib and /lib.
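A sketch of appending these paths in a POSIX shell (any existing value is preserved):

```shell
# Append the standard C library paths to AIX's shared-library search path:
export LIBPATH=${LIBPATH:+$LIBPATH:}/usr/lib:/lib
echo "$LIBPATH"
```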
Use of Raw Ethernet
In order to use raw Ethernet, an IBM AIX® machine must have the DLPI (Data Link Provider Interface) packages installed. If the machine does not have the DLPI packages, obtain them from your IBM provider and create DLPI devices through the following procedure:
-
Log in as root.
-
In the PSE drivers section of the /etc/pse.conf file, uncomment the four lines that refer to the DLPI drivers.
-
Save the file.
-
Restart the computer.
If the DLPI devices are not installed, the EthernetAddress() method of the %SYSTEM.INetInfoOpens in a new tab class returns a null string rather than information about the Ethernet device.
Red Hat Linux Considerations
The following considerations may apply to your environment:
-
The default shared memory limit (shmmax) on Linux platforms is 32 MB, which is too small to install or run Caché. If the installation fails, you can change the value interactively in the proc file system (see the Red Hat Linux Platform Notes section of Calculating System Parameters for UNIX®, Linux, and macOS for more information), then reinstall Caché. The new memory limit remains in effect until you restart the Red Hat Linux system.
Alternatively, you can change the value permanently by editing the /etc/sysctl.conf file, which requires a restart of the Red Hat Linux system for the new value to become effective.
-
On Linux platforms with sufficient Huge Pages available, the Caché shared memory segment will be allocated from the Huge Page pool. A beneficial consequence of using Huge Pages is that the Caché shared memory segment will be locked into memory and its pages will not be paged out. See the section “Support for Huge Memory Pages for Linux” in the section Calculating Memory Requirements in this chapter for information about allocating Huge Pages.
-
To use Kerberos on the Red Hat Linux platform, you must install the krb5-devel package in addition to the krb5-libs package. Installing krb5-devel establishes the required symbolic links for using Kerberos. The package is required for production environments, not only development environments. See the Red Hat NetworkOpens in a new tab web site for more information about these components.
-
Red Hat Enterprise Linux V4 requires Websphere MQ version 7.0 to use the MQ interface.
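The shmmax adjustment described in the first item above can be sketched as follows (1 GB is an illustrative value; size it to your global and routine buffers):

```shell
# Temporary, until the next reboot (run as root):
#   echo 1073741824 > /proc/sys/kernel/shmmax
# Permanent — add this line to /etc/sysctl.conf:
kernel.shmmax = 1073741824
```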
Oracle Solaris Considerations
Using Kerberos on Solaris SPARC Release 10 requires two patches, Patch IDs 120469-03 and 121239-01. You can download these patches from the Oracle Support website.
If the Ethernet adapters are protected against access by nonroot users, the EthernetAddress() method of the %SYSTEM.INetInfo class invoked by a nonroot user returns a null string rather than information about the Ethernet device.
If you install Caché in a non-global zone, you must perform the following additional configuration steps:
This procedure must be performed while in the global zone.
-
Ensure that the /usr/bin and /usr/local subdirectories in the non-global zone have write permission, as shown in the following steps:
-
Create the subdirectories (with write permission) in the global zone, as shown in the following example:
bash-3.00# mkdir -p /export/zones/test-zone/local
bash-3.00# mkdir -p /export/zones/test-zone/bin
bash-3.00# chmod 700 /export/zones/test-zone/local
bash-3.00# chmod 700 /export/zones/test-zone/bin
-
Configure the /usr/bin and /usr/local subdirectories (with read-write permission) for the non-global zone to use the subdirectories created above in the global zone, as shown in the following example:
bash-3.00# zonecfg -z test-zone
zonecfg:test-zone> add fs
zonecfg:test-zone:fs> set dir=/usr/local
zonecfg:test-zone:fs> set special=/export/zones/test-zone/local
zonecfg:test-zone:fs> set type=lofs
zonecfg:test-zone:fs> set options=[rw,nodevices]
zonecfg:test-zone:fs> end
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
bash-3.00# zonecfg -z test-zone
zonecfg:test-zone> add fs
zonecfg:test-zone:fs> set dir=/usr/bin
zonecfg:test-zone:fs> set special=/export/zones/test-zone/bin
zonecfg:test-zone:fs> set type=lofs
zonecfg:test-zone:fs> set options=[rw,nodevices]
zonecfg:test-zone:fs> end
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
-
Copy all binaries to the newly created /usr/bin subdirectory for the non-global zone to ensure that the non-global zone boots properly, as shown in the following example:
bash-3.00# cp -rp /usr/bin/* /export/zones/test-zone/bin
-
-
For ECP connections to work properly in non-global zones, the proc_priocntl privilege must be specified within the zone, as shown in the following example:
bash-3.00# zonecfg -z test-zone
zonecfg:test-zone> set limitpriv="default,proc_priocntl"
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
See the Oracle Solaris Platform Notes section of the chapter “Preparing to Install Caché” for more information.
SUSE Linux Considerations
The following considerations may apply to your environment:
-
The default shared memory limits (shmmax and shmall) on SUSE Linux 32-bit platforms are too small for Caché, and can be changed in the proc file system without a restart.
-
On Linux platforms with sufficient Huge Pages available, the Caché shared memory segment will be allocated from the Huge Page pool. A beneficial consequence of using Huge Pages is that the Caché shared memory segment will be locked into memory and its pages will not be paged out. See the section “Support for Huge Memory Pages for Linux” in the section Calculating Memory Requirements in this chapter for information about allocating Huge Pages.
-
To use Kerberos on the SUSE Linux platform, you must install the krb5-devel package in addition to the krb5-libs package. Installing krb5-devel establishes the required symbolic links for using Kerberos. The package is required for production environments, not only development environments. See the SUSE documentation web site for more information about these components.
See the SUSE Linux Platform Notes section of the chapter “Preparing to Install Caché” for detailed configuration information.
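The proc file system change described above can be sketched as follows. This is a sketch only; both values shown are assumed examples, and the appropriate sizes depend on your Caché configuration (see the SUSE Linux Platform Notes for guidance). The commands must be run as root:

```shell
# Inspect the current limits:
cat /proc/sys/kernel/shmmax    # maximum size of one segment, in bytes
cat /proc/sys/kernel/shmall    # total shared memory, in pages

# Raise them on the fly; no restart is required (example values only):
echo 1073741824 > /proc/sys/kernel/shmmax
echo 4194304 > /proc/sys/kernel/shmall
```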
macOS Considerations
For the cinstall script procedure, see the section “Performing a Caché UNIX® Installation” in the chapter Installing Caché on UNIX®, Linux, and macOS in this book.