Original Publication Date: 08/29/2013
This release note documents the version 6.3.0 release of the ARX software. We recommend this release for those customers who want the fixes and enhancements listed in Fixes and Enhancements in This Release.
This release is cumulative, and includes all fixes and enhancements released since version 5.0.1. You can apply the software upgrade to 5.0.0 and later.
Note: F5 offers general availability releases and general sustaining releases. For detailed information on our policies, refer to Solution 8986, F5 software life-cycle policy.
In addition to these release notes, the following user documentation is relevant to this release.
These manuals are available from the ARX GUI or CLI. From the GUI, click on the Documentation link in the navigation panel. From the CLI, use the show software command for a complete listing of the ARX manuals, then use the following command to upload the manual from the ARX:
copy software manual-name destination-url
You can also find the product documentation on the ARX version 6.3.0 Documentation page of the F5 Online Knowledge Base website, along with an extensive solutions database.
The minimum supported browsers for the ARX Manager GUI are:
This release supports the following ARX platforms:
Refer to the ARX Compatibility Matrix for a complete list of vendor equipment that is certified for use with this release. Refer to AskF5 solution 10909.
The following third-party software packages are relevant to SCAP-compliant security scans of the ARX, from products like Retina:
For an existing installation, you can upgrade to 6.3.0 from any of the following releases:
For installation instructions, refer to the Upgrading Software chapter of the CLI Maintenance Guide.
If you must upgrade from an earlier Release (such as 4.1.3) or an interim release (such as 5.2.2), upgrade both peers to one of the above 5.x releases before upgrading them both to the current release.
In a redundant pair, you must invoke an additional failover after both peers have been upgraded. The final failover, with both peers at 6.3.0, makes it possible to perform the final upgrade of internal databases and enable the dependent features.
When upgrading an ARX 1500 or ARX 2500 from a release prior to Release 6.2.0, a metalog latency trap may be raised during the upgrade process. The trap should clear within minutes of completing the rolling upgrade.
To install a new ARX system, refer to the ARX's installation manual. A hard copy is included with each hardware platform, and an online copy of the ARX-VE Installation Guide is available with its OVF file.
For upgrade instructions, refer to the Upgrading Software chapter of the CLI Maintenance Guide.
Once you install the software, refer to the Required Configuration Changes section, which contains important information about activating your license. You must do this before using the new software.
If you upgraded an ARX-2500 chassis from a release prior to 6.2.0, we also recommend that you use the no resource-profile legacy CLI command to take advantage of the performance improvements in Release 6.2.0. Refer to Improved ARX-2500 Performance with New Resource Profile for details about this feature.
Downgrades are not recommended. Contact F5 Support if you feel you need to downgrade to an earlier software release. For detailed instructions, refer to the Upgrading Software chapter in the CLI Maintenance Guide; this contains a section specific to downgrades.
Release 6.3.0 includes several new features and fixes, described in the sections below.
Release 6.3.0 includes the following new features:
Support has been added for the case in which a Windows client uses offline access for a file on a remote CIFS share and Windows creates a local copy of the file for the client. The client can use that local copy whenever the client machine is disconnected from the CIFS share, and can later sync the local copy with the original whenever the CIFS-share connection is up.
By default, clients can select directories and files for offline access manually. You can use the export offline-access command to also automatically enable offline access for any file the client opens (with or without network optimization), or to disable all offline access.
To accommodate the CIFS offline access/client-side caching feature, a new section, Controlling Access to Offline Shares, has been added to the ARX CLI Storage-Management Guide. Consult also the ARX CLI Reference Guide entry for the export offline-access command.
Volumes that support CIFS face additional possible naming collisions, caused by back-end servers keeping an extra name for some files and directories. Support has been added for the ARX to handle these naming collisions transparently.
This ability was added in response to bug 383020.
For supporting details on this issue, consult the ARX CLI Maintenance Guide. Specifically, see Managing Collisions With CIFS 8.3 Names.
Snapshot support has been added for Hitachi HNAS (powered by BlueArc) platforms. ARX volumes can now coordinate snapshots (point-in-time copies) of all back-end shares on this platform (CIFS only).
Snapshot browsing is also supported on this platform.
An enhancement was made to a pre-existing ARX daemon to periodically poll the peer's LACP configuration and compare it with the local ARX. The enhancement raises and clears traps specific to an ARX channel number, and emits channel-specific warning logs containing the details of the LACP mismatch.
You can now use the offline command to choose files based on the setting for the CIFS offline attribute.
You can use such a fileset to identify all files with this CIFS attribute and migrate them (or avoid migrating them) accordingly.
The offline setting is the only supported CIFS attribute in a CIFS-attribute fileset. This type of fileset is available to all namespaces and volumes that support CIFS. It is not available in an NFS-only namespace.
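As a rough illustration, a fileset predicate over this attribute amounts to testing one bit in a file's attribute word. The sketch below uses the standard Windows constant FILE_ATTRIBUTE_OFFLINE (0x1000); the function name and this representation are illustrative assumptions, not the ARX's internal implementation.

```python
# FILE_ATTRIBUTE_OFFLINE is the standard Windows file-attribute constant;
# the helper below is a hypothetical sketch of a fileset predicate.
FILE_ATTRIBUTE_OFFLINE = 0x1000

def matches_offline_fileset(file_attributes: int, want_offline: bool = True) -> bool:
    """Return True if the file's attribute word matches the fileset's
    offline setting (attribute set or attribute clear)."""
    is_offline = bool(file_attributes & FILE_ATTRIBUTE_OFFLINE)
    return is_offline == want_offline
```

A migration rule could then select (or exclude) files for which this predicate is true.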
This ability was added in response to bug 376364.
Release 6.3.0 adds the following fixes to the ARX:
Some Solaris clients can specify size limitations for NFS RPCs. When this happens and filers respond with RPCs larger than this limitation, the ARX does not "trim" the RPC, resulting in the Solaris client not receiving the RPC response. This happens only on ARX-1500 and ARX-2500 on releases after 6.0.0.
NFS snapshots do not support the behavior set with the offline-behavior ... deny-access command. The deny-access behavior is to return an NFS error for each file that is offline (that is, for each file on a back-end filer that is unreachable). The command has no effect when an NFS client is browsing snapshots; the ARX never returns an error (or any response) for snapshot queries to offline filers, possibly causing an NFS-client application to hang.
This release fixes a corner case in which a customer replaces an ARX, neglects to activate its license, and allows the ARX to continue running in the unlicensed state. Note that connecting an unlicensed ARX device with a running-configuration to the network could significantly impact client performance.
If a saved running-config was executed on an ARX that was recently upgraded from a pre-5.3.0 release to a higher-versioned release, an additional configuration record would be created that was associated with the snmp-server traps command.
Because only one record was expected by the ARX software, unpredictable behavior resulted when sending traps. The code was modified to work only with the record having a certain primary key. If another record exists, it is ignored.
An issue has been fixed in which incomplete configuration of a filer's secondary IP addresses could cause connection failures between the ARX and the filer and, eventually, exhaust the ARX's Radix resources for file-server transaction IDs.
To establish LACP on a channel for ARX-500, ARX-2000, and ARX-4000, use lacp passive on the ARX and lacp active on the peer. If you connect two ARX peers over a channel in a redundant configuration, use lacp passive on both ARX peers to establish LACP. One of them assumes the active LACP role automatically.
To establish LACP on a channel for ARX-1500 and ARX-2500, use lacp active on the ARX and lacp passive on the peer. If you connect two ARX peers over a channel in a redundant configuration, use lacp active on both ARX peers to establish LACP. One of them assumes the passive LACP role automatically.
On the ARX-1500 and the ARX-2500, an SNMP walk shows incorrect usage statistics for the ARX processors. Specifically, the numbers for "usageLast1minute" and "usageLast5minutes" are incorrect for each processor.
Workaround: Use the show processors CLI command to find the correct usage numbers for ARX-1500 and ARX-2500 processors.
An issue has been fixed in which a new share could not be imported. The new share would not import because the import of a previous share had failed and that share had been removed from the volume after the failed import.
ARX subshare synchronization is now more resilient in cases where an administrator creates a back-end share (for example, "myshare") whose name coincides with an existing ARX-generated share (such as "_acopia_myshare_42$"). This unexpected back-end filer change formerly caused a subshrmgtd core; the daemon can now recover from this situation.
FP_LOOKUP_ERROR: "Unknown Ecode Type"
Messages of this type can safely be ignored.
Other platforms (ARX-2000, ARX-4000, etc) have always idled out CIFS client connections after 15 minutes of inactivity. Now ARX-1500, ARX-2500, and ARX-VE platforms will do the same.
Without this idle timeout, client connections could be exhausted if there are many idle connections that never terminate themselves. Workaround: identify those idle connections and terminate them manually.
Snapshot browsing is supported for EMC, NetApp, and Data Domain filers.
This isn't a problem until the user wants to remove the namespace with the remove service command. The command fails with the error EXT_FILER_IN_USE.
Workaround: If the DR cluster reporting EXT_FILER_IN_USE is the disabled cluster, the clear global-config command can be used to remove all global configuration. Otherwise, everything except the external filer reported in the EXT_FILER_IN_USE error can be removed with step-by-step removal of the service in the following order: CIFS service, global server, namespace, and associated external filers.
The current release includes the fixes and enhancements that were distributed in prior releases, as listed below. (Prior releases are listed with the most recent first.)
Release 6.2.0 included several new features and fixes, described in the sections below.
Release 6.2.0 included the following new features, also included in this release:
ARX v6.2.0 provides support for new ARX-1500 and ARX-2500 platforms, which include new disk-drive trays. The new platforms also include fans that run at a faster speed; you may notice that the units run louder and produce greater air flow.
The new platforms are functionally equivalent to existing ARX-1500 and ARX-2500 platforms.
In support of the new platforms, consult the following documentation:
These documents are included in your 6.2.0 release; you can retrieve them from the GUI or download them from the CLI.
Volume software shares memory and other resources with other volumes in the same volume group. Each volume group is a failure domain; a catastrophic failure in one volume may affect other volumes in the same group, but volumes in other groups are insulated from any such failure. Release 6.2.0 allows you to migrate a volume from one volume group to another.
Release 6.2 offers an operation for designating a new IP address for a back-end filer/server. You can use the ip address ... change-to command to specify one or more IP addresses for your external-filers, then use the ext-filer-ip-addrs activate command to reboot the ARX and activate all the address changes. If the ARX has a redundant peer, this command reboots both peers. This causes a service outage, so you should run the command during non-busy hours. The CLI prompts for confirmation before rebooting; type yes to proceed.
After the delegation settings for a CIFS service change at a domain controller (DC), the ARX software can take up to 10 minutes to synchronize with the change. Delegation settings determine whether or not the ARX CIFS service is allowed to authenticate a client once on behalf of all CIFS servers behind it, unconstrained (to any other CIFS server) or constrained to a specific set of CIFS servers. Release 6.2.0 introduces an operation to synchronize the ARX immediately with these Active-Directory changes. You can use the sync cifs delegation CLI command or its GUI equivalent to invoke this operation.
Release 6.2.0 contains performance improvements for the ARX-2500 with the new resource profile feature. This feature dedicates three separate hardware cores to network processing. The former resource profile divided the processing among four virtual cores, but these cores shared their hardware resources. Upgraded ARX-2500 devices retain their pre-6.2.0 "legacy" profile. You can use the no resource-profile legacy CLI command to upgrade to the new profile. This is recommended as a best practice.
You must reboot the ARX-2500 after executing the resource-profile command in order for it to take effect. (For a redundant pair, be certain to reboot both ARX-2500s; use the dual-reboot command to do this.) This is true also when replaying a saved configuration.
The ARX software now supports virtual snapshots for Data Domain filers running Data Domain OS 5.
The ARX now supports the migration of very large files while snapshots are being executed on the containing volume. Previously, in-process file migrations were cancelled when it was time to execute a snapshot.
The policy migrate-method CLI command has been made available to enable you to control the method the ARX uses to migrate files. It has two options: staged (the default and recommended behavior) and direct (not recommended).
The default behavior, staged, makes the policy engine migrate each file to a hidden staging area at the destination share, and then move the file to its final name and location. This method succeeds while the volume is taking snapshots, with a modest performance penalty.
An alternative method, direct, is available but is not recommended. Direct migration makes the policy engine migrate each file directly to its destination. If a snapshot occurs in the middle of a direct migration, the migration is cancelled and must be restarted from the beginning on any later migration attempt. If the file is large enough to require a very long migration time, regular snapshots could prevent the file from ever fully migrating. However, direct migrations can sometimes be faster than staged migrations, especially in a volume that migrates large numbers of small files.
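The staged method described above can be sketched in a few lines: copy into a hidden staging area on the destination, then rename into the final location. This is an illustration only; the staging-directory name and function are invented placeholders, and the ARX's actual staging path and source-side cleanup are not shown.

```python
import os
import shutil

def staged_migrate(src: str, dst_dir: str, staging_dirname: str = ".staging") -> str:
    """Sketch of a staged migration: the long-running copy lands in a
    hidden staging directory, then a quick rename moves the file to its
    final name. Removal of the source copy is omitted here."""
    staging_dir = os.path.join(dst_dir, staging_dirname)
    os.makedirs(staging_dir, exist_ok=True)
    staged = os.path.join(staging_dir, os.path.basename(src))
    shutil.copy2(src, staged)              # slow phase: bulk data copy
    final = os.path.join(dst_dir, os.path.basename(src))
    os.replace(staged, final)              # fast phase: rename into place
    return final
```

Because only the brief rename phase touches the final location, a snapshot taken during the copy phase does not invalidate the work already done, which matches the behavior described for the staged method.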
The ARX now supports the presentation of NFS snapshots and the ability to browse them via the ARX's virtual IP address.
F5 Networks now explicitly supports "stretch clusters." Stretch clusters are redundant ARX pairs in which the two chassis in the cluster are separated geographically, but in which the redundancy interface is a direct connection between the two ARX chassis without an intervening network switch. For the ARX-2500, long-reach optical connectors and short-reach copper connectors are available for supporting low-latency direct connections over the switch's 10-Gigabit Ethernet interfaces.
Administrators must recognize that ARX performance for stretch clusters degrades very significantly as latency between the ARX peers increases. The actual performance that you witness will vary according to the ARX models that you use, the geographical distance between them, the protocol in use, the nature of the files and directories that are managed, and the operations that are performed on those files and directories. During our own testing of this feature, F5 Networks observed typical performance degradation of about 30% for 0.2 ms latency between peers, degradation of about 90% for 1 ms latency between peers, and degradation of about 95% for 2 ms latency between peers.
The show redundancy metalog CLI command is useful for monitoring the connection between redundant peers that are separated by long distances. The output from this command includes the latency between the two peers, measured in microseconds.
If a pre-win2k domain name (used when the environment does not use Active Directory or does not support Kerberos authentication) is not configured explicitly, it is now discovered automatically during AD configuration and/or AD discovery. In the unlikely event that a pre-win2k domain name is not identified at that time, the ARX derives one as a last resort by taking the portion of the FQDN before the first period and truncating it to 15 characters.
Kerberos authentication requests now are distributed (load-balanced) across all of the online DCs in a domain. The set of domain controllers across which the requests are load-balanced is the set of all DCs that are preferred and online, or, if no preferred DCs are online, the set of all DCs that are non-preferred and online.
It is now possible to migrate hard links off of a share. You can use the source command without any fileset to drain all of the files and directories from the share, or you can use the new migrate hard-links CLI command to enable migration of files that match a fileset and have hard links.
Hard link migration is disabled by default, and is configured as "no migrate hard-links".
This feature is for NFS services only.
You can now display a summary of a managed volume's directory structure without having to run an actual metadata report, using the new CLI command nsck report dir-structure. This command generates a directory structure report that you can view later.
It is possible now to manage SSL certificates from the ARX Manager GUI, and support for this function is provided now in the ARX CLI as well. This includes the ability to regenerate a self-signed SSL certificate, import a CA-signed certificate and CA certificate chain, and modify the SSL cipher suite used by the ARX to negotiate SSL connection parameters.
Software release files now can be uploaded to the ARX using HTTP and HTTPS via ARX Manager.
The ARX now enables you to configure behavior for offline NFS filesystems. The new CLI command, offline-behavior, enables you to specify a "deny-access" setting that allows offline NFS exports to return an access error, causing the NFS client to mark the export with "Permission denied". The NFS request will continue its operation, accessing those NFS exports that are online, without the request hanging while it awaits a response from the offline NFS filesystem.
The default setting for the command, offline-behavior retry, configures the same behavior that was exhibited at all times in earlier releases, in which the client keeps retrying the request and waiting for that export to respond. This may result in the request hanging indefinitely.
The Common Operations page in the ARX Manager GUI, as well as its constituent tabs, has been revised and enhanced to provide more streamlined access to a variety of frequently-used configuration tasks.
Release 6.2.0 added the following fixes to the ARX:
Files now stay on the same share as the parent directory, but the directories are placed round-robin in the farm.
If any of the shares are offline, no results should be returned; otherwise, partial results could be returned. If the client's cache contains partial results, the client will not recover correctly when the share comes back online.
The definition of host switch included in the description of the show namespace command in the ARX CLI Reference has been clarified as, "typically the ARX peer where the volume was originally created."
Fixed a CIFS issue where message ids used to send RPCs within the ARX were incorrectly being returned to the pool that tracked client message ids. This would cause an assertion in the NSM code and a subsequent crash.
The ARX software now collects export usage statistics more efficiently when a client requests that usage information, reducing delays when other clients access files or folders in the share.
A problem that caused samrefoffline traps to be sent following an upgrade of the ARX software has been fixed. The standby ARX no longer performs SAM filer probes, and the SNMP trap will be sent now only after four consecutive failures.
The ARX Manager GUI now executes snapshots correctly when the snapshot interval is changed. Previously, changing the snapshot interval from, for example, every four hours to every six hours, caused the next snapshot to be executed six hours from the time the interval was changed rather than six hours from the previous snapshot.
The description of collision handling with CIFS "8.3" filer-generated names in the ARX CLI Maintenance Guide has been clarified to emphasize that the ARX does not support 8.3 FGNs, and 8.3 FGN creation should be disabled on back-end filers.
A problem that caused metadata inconsistency for folders created via the ARX has been fixed. A folder created via the ARX and subsequently renamed via the ARX to use an "8.3" name was not renamed on the back-end filer itself, resulting in a metadata inconsistency.
The show exports operation generally needs admin-level credentials to read ABE settings for filer shares. If run with lesser credentials, ABE was always reported as not set. It is now reported as not available, to distinguish this from the actual state of not being set. A '?' in the attributes table identifies the not-reported state.
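The corrected tri-state reporting can be sketched as follows; the function and labels are illustrative, not the ARX's actual output format.

```python
def abe_status(admin_credentials: bool, abe_enabled: bool) -> str:
    """Sketch of tri-state ABE reporting: without admin-level credentials
    the setting cannot be read at all, so report '?' (not available)
    rather than conflating it with 'not set'."""
    if not admin_credentials:
        return "?"
    return "set" if abe_enabled else "not set"
```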
The description of the Role, the type of process that typically runs on one processor core or another, now appears correctly in appropriate topics in the ARX Manager online help. Previously, this description was absent.
When redundant ARX pairs are in use, F5 Networks recommends strongly that you configure a link-aggregation channel for the redundancy link to ensure resilient and optimized performance.
A problem has been fixed in which executing the clear global-config command on a replacement ARX during the replacement rendezvous could incorrectly mark IP addresses from the previous configuration as not in use when they were still in use, preventing new services from starting correctly.
The ARX Manager GUI Initial Setup wizard now requires only two proxy IP addresses for configuration of an ARX-1500. Previously, this wizard incorrectly required you to provide four proxy IP addresses for an ARX-1500.
The ARX now allows users to join domains using all username formats supported by Windows. Previously, the ARX did not support the use of usernames registered in different domains, or of full Kerberos names in the same domain.
The entry for the import priority command in the ARX CLI Reference now states explicitly that the first-configured share is assigned as the master for the volume's root directory, and the import priority does not change this. Read the complete entry in the ARX CLI Reference for detailed guidelines.
ARX error messages have been improved to provide better notification if an attempt is made to replace an ARX in a pair in which the replacement switch and its peer have a software version mismatch.
The section titled, "Replacing a Redundant Peer" that is present in all ARX Hardware Installation guides has been revised to describe more clearly all considerations involved in the replacement of an ARX in a redundant pair configuration.
For the ARX-VE, the ARX-1500, and the ARX-2500, an snmpwalk operation on the ifTable returned internal interfaces (such as "dummy," "bond," and "loopback") along with relevant interfaces. Now the same snmpwalk returns only the interfaces that are visible with the show interface summary CLI command.
Release 6.1.1 was a maintenance release including a number of new fixes, described below:
During an ARX rolling upgrade from a release prior to 6.0.0, the "CIFS browsing" portion of the upgrade sometimes triggers GSMD (Global Service Manager) to enter a tight loop on the backup ARX.
This issue was preceded by numerous transaction conflicts on the Exports table and was related to ARX configurations containing a large number of CIFS exports. Conditions were added to ensure that only the active ARX performs the CIFS browsing upgrade, and that it does so before starting services.
In large subshare configurations, the subshare management daemon's remote procedure call (RPC) response thread could cease to service RPC requests during periodic OMDB cleanup activities.
This cleanup work has now been moved to a separate thread, so that RPCs can always be serviced in a timely manner.
A metadata corruption problem that caused core files to be generated has been fixed. The problem occurred when a single filesystem was exported via multiple nodes as metadata shares and subsequently destaged.
The severity of the "xiplip-inconsistency-raise" event has been lowered to Warning from Critical, and its associated description has been clarified to describe the event's possible causes more explicitly.
A spurious error message, WMI-0-ERR-WMI_DESTINATION_UNREACHABLE_FAULT, has been removed from the software, and no longer will appear in logs. Previously, this message appeared unnecessarily for some file server configurations.
If you removed an ARX share with more than 38 characters in its name, the ARX created a place rule (to drain the share) with a name length greater than 64 characters. This exceeded the maximum length for a rule name, thereby making it impossible to delete the rule through the CLI. Now rule names can be up to 1024 characters.
Certain database corruptions cause progressively slower database-response times, and an internal database-cleanup process was timing out before it cleaned the database. In one case, the progressively-slower database resulted in virtual-service outages after an upgrade. The timeout for the database-cleanup process is now long enough to work around this issue and repair database corruptions.
A problem has been fixed in which the count form of the ip proxy-0 command caused proxy IP address assignments to be created out of order, subsequently resulting in the creation of core files.
A problem that caused a single logical IP address to be associated with multiple proxy IP addresses has been fixed. In addition, the SNMP trap that was raised when this occurred has been corrected to refer to logical IP addresses rather than external IP addresses.
Release 6.1.0 included several new features and fixes, described in the sections below.
Release 6.1.0 added the following features to the ARX:
A new stats-monitor process now runs in the ARX software, monitoring the time taken for requests to filers, requests to clients, requests to other external devices, and internal processing. If the times increase by a wide-enough margin over a long-enough time, the stats-monitor places an alert message into the ARX syslog file. You can use the stats-monitor CLI command to enter a new CLI mode, where you can enable SNMP traps for each of these alerts. You can also change the alert thresholds from this mode. These statistics are logged in corresponding files, and their running histories can be displayed.
The ARX collects all of the statistics by default, but no analysis is performed until the stats-monitor is configured. As such, the corresponding alerts are not displayed in the syslog by default.
The stats-monitor command and its sub-modes are beta-level software in Release 6.1.0. They have not been tested as rigorously as other 6.1.0 features. Use the terminal beta command to reveal all beta-level commands, including this one.
Support has been added for multi-protocol configurations in which ARX managed volume shares are imported from NetApp filers at the NetApp volume level (as opposed to at the Qtree level).
In previous releases, migrations would fail in such configurations. This increased support does not entail any command changes or additions.
It is possible now to display the free space for a volume in the context of the user account and path used to access the volume. This enhancement supports path-based quotas. The freespace cifs-quota CLI command has been added to gbl-ns-vol mode, enabling volume free space to be displayed based on the credentials of the user executing the command and the path by which the volume is accessed.
This behavior is disabled by default (no freespace cifs-quota), causing the system-wide free space algorithm to be used, as was the case in previous releases. Use the filer-subshares CLI command to enable CIFS subshares prior to using this feature; refer to the ARX CLI Storage-Management Guide for complete instructions.
Note that this command pertains specifically to CIFS clients, and has no effect upon NFS queries.
The functionality for reporting volume size and free space has been enhanced so that it is possible to display the free space only for the back-end file system that is being accessed, rather than for the entire managed volume. This is useful in cases such as file migration, in which the temporary existence of two filesystems for the one in migration could otherwise distort the results of volume size and free space reporting. This is accomplished via a new argument for the existing freespace calculation CLI command, dir-master-only, which causes the share in question to be queried only on the storage resource at which its master instance resides.
In addition, the freespace apparent-size CLI command has been added to gbl-ns-vol-shr mode, enabling an administrator to configure an artificial capacity value for the volume that is less than or equal to its actual capacity. This command is accessible only in dir-master-only mode.
This functionality has no effect on policies and shadow volumes, for which actual free space, not apparent size and free space, is in use at all times.
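The interaction between apparent size and free-space reporting can be sketched as follows. This is an illustration under the assumption that the configured apparent size simply caps the capacity used in the calculation; the function name and clamping behavior are not taken from the product documentation.

```python
from typing import Optional

def reported_free_space(actual_capacity: int, used: int,
                        apparent_size: Optional[int] = None) -> int:
    """Sketch: when an artificial capacity (apparent size) at or below
    the actual capacity is configured, free space is reported against
    it; the result is clamped so it never goes negative."""
    if apparent_size is None:
        capacity = actual_capacity
    else:
        capacity = min(apparent_size, actual_capacity)
    return max(capacity - used, 0)
```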
NSM warm restart now is supported on older ARX platforms with multiple cores (ARX-1000, ARX-2000, ARX-4000, and ARX-6000). This feature improves behavior associated with the older processor designs used by those platforms, and is not relevant to the newer ARX-1500 and ARX-2500 models.
One aspect of NSM recovery is that once an NSM reanimates, not all cores within the same processor can be Up again without reloading; this is inherent to the design of the NSM's internal failover behavior. The system functions at a reduced level when this occurs. NSM warm restart functionality addresses this situation by restarting only the NSM core that failed. The other cores within the same processor remain unaffected. When restarted, the NSM comes back Up (not in Standby) and resumes its normal traffic load.
NSM warm restart functionality is disabled by default.
Execute the nsm warm-restart CLI command in config mode to enable NSM warm restart. The no nsm warm-restart CLI command disables the functionality.
Release 6.1.0 adds the following fixes to the ARX:
On an ARX 1500 or ARX 2500, a connection refused error may occur immediately after a share is enabled, or when a share fails to import. (365313)
In the first case, simply execute the enable command another time. In the second case, remove the share and re-enable it.
A problem that caused virtual services to take four to five minutes to restart following a failover has been fixed by changing the ARX's management of TCP connections to metadata filers.
The CIFS protocol specification requires that filer-assigned tree connection identifiers be unique within a single TCP connection. The Sun filer violated this rule, leaving the ARX unable to connect more than one user session to any given share. The ARX has been changed to be less sensitive to strict protocol adherence in this regard.
In Release 6.0.0, RAID verification ran automatically every five minutes, which affected Metalog performance on the ARX-1500 and ARX-2500. Once Release 6.1.0 is installed on an ARX-1500 or ARX-2500, RAID verification runs once per day at 23:00, beginning the day after installation. On other ARX platforms, RAID verification behavior does not change after Release 6.1.0 is installed. If the RAID verification mode is manual, and RAID verification has not run on an ARX-1500 or ARX-2500 in the last 24 hours, the traplog shows an entry reminding the user to execute RAID verification.
A problem existed in which editing a volume configuration via the GUI caused the metadata shares in ARX disaster recovery clusters to become misconfigured. This problem manifested only in disaster recovery clusters and is now fixed.
If users activate the licenses on the switches in a redundant switch pair on different dates, the expiration times for those licenses differ. License keys with different expiration dates in a redundant ARX pair now raise a trap only if the difference between the expiration dates is greater than one day.
EMC servers support three or more colon (:) characters in their named-stream names. Other CIFS servers do not support this naming convention, so previous ARX releases could not migrate these named streams from an EMC server to another vendor's server.
Now, whenever the ARX migrates an EMC named stream with this naming issue, it renames the stream so that the migration can succeed. The ARX software keeps the first and last colon, and replaces each intermediate colon with '~'. For example, the ARX changes
at the target file server. The name remains the same at the EMC server.
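The renaming rule can be sketched in Python; this is an illustration of the described behavior (with a hypothetical stream name), not ARX code:

```python
def rename_emc_stream(name: str) -> str:
    """Keep the first and last colon in a named-stream name; replace each
    intermediate colon with '~', per the migration rename rule."""
    positions = [i for i, ch in enumerate(name) if ch == ':']
    if len(positions) <= 2:
        return name  # two or fewer colons: already legal on other CIFS servers
    chars = list(name)
    for i in positions[1:-1]:
        chars[i] = '~'
    return ''.join(chars)

# Hypothetical stream name with extra colons:
print(rename_emc_stream('file.txt:a:b:c:$DATA'))  # file.txt:a~b~c:$DATA
```

Names with two or fewer colons are left unchanged, since only the EMC-specific extra colons violate the convention of other CIFS servers.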
In releases before 5.02.000, a CIFS-only managed volume occasionally included user-based and path-based storage quotas in its free-space calculations. For example, if a CIFS client had a quota of 5G on a back-end Windows share with 500G of actual space, the client saw only 5G for that Windows share. If filer subshares were configured in the ARX volume, path-based quotas were also taken into account on some occasions.
These quota-based free-space calculations were inconsistent, so they were eliminated in Release 5.02.000. Following that release, all CIFS clients saw the full free space on all back-end shares behind their ARX volumes, regardless of any space quotas those clients may have had on the back-end Windows servers.
The current release introduces the freespace cifs-quota CLI command, providing reliable support for the original behavior. With this option enabled, clients see the sum of the quota-based free space on the volume's back-end shares rather than the sum of the full free space. Thus, a CIFS client with a 1G quota sees only 1G of space, and a connection to a CIFS subshare with a 5G quota sees only 5G of space.
The remove-share command no longer fails due to an excessively short timeout for obtaining free-space information about the shares being removed. The timeout value has been increased to prevent this failure.
On ARX HA pairs, changes in global configuration data are synchronized to each ARX database with a two-phase commit, starting with the standby. A small time window exists where if the standby ARX reboots in the middle of a two-phase transaction, the active ARX might hold the transaction open (in limbo) due to uncertainty of the remote commit.
Limbo transactions now are rolled back on the active ARX if the standby is known to be down. In this case, it doesn't matter if the transaction was committed on the standby, because, when it pairs up with the active again, all of its global data will be resynchronized.
A problem related to the at schedule command and automatic retries caused by DB transaction conflicts has been fixed. The at schedule command now handles transaction retries independently of the automatic retry mechanism used by the rest of the CLI.
Shares are no longer marked as "Pending Import" when a recursive sync runs on one volume while another volume in the same instance is imported after the sync operation has begun.
When using the show redundancy quorum-disk command, the fractional part of a minute is no longer included in the most current interval. For example, if the most recent interval is 10:00 - 10:33, but the current time is 10:33:45, the last 45 seconds of data will no longer be included. This rectifies a problem in which heartbeat latency counts were not consistent.
The filer type windows cluster command has been added to configure a Windows Server 2008 (or later) file server explicitly as a cluster node. This fixes a problem in which the disabling of NetBIOS on Windows Server 2008 (or later) cluster nodes caused ARX snapshots to identify cluster nodes incorrectly as non-cluster nodes, and fail to create snapshots. The command also adds the capability to discover the Windows Server 2008 (or later) cluster through WinRM when the cluster is not set explicitly through the CLI.
The ARX now checks whether the client connection state referenced by a transaction is still valid before accessing and incrementing error statistics. This fixes a crash caused by a front-end disconnection followed by attempts to increment error statistics associated with that client connection state.
A problem that prevented configuration of more than six channels on an ARX has been corrected. The ARX now allows configuration of up to eight channels, as described in the ARX user documentation.
Shadow volume functionality now creates the .acopia directory on Samba filers differently, creating the directory without specifying a security descriptor, and making and applying an appropriate ACL. This fixes a problem that caused shadow volume creation to fail when running SUSE Enterprise Linux 11 and Samba Version 3.4.3-1.19.1-2426-SUSE-CODE11.
You can use the domain-join operation to join an Organizational Unit (OU) within a Windows domain. The documentation and online help indicate that you should use a backslash character (\) to separate the layers in a nested OU, but the domain-join previously failed with this separator. Now the domain-join succeeds with a backslash in the OU.
Support for multi-protocol namespaces in NFS services no longer allows exporting from different namespaces with inconsistent protocols. For example, exporting from namespace A with protocol NFS2 and exporting from namespace B with protocol NFS3 is not allowed now.
The Tiered Storage wizard in the ARX GUI now displays existing namespaces and volumes the first time it is invoked. Previously, existing namespaces and volumes did not appear in the wizard until it had been invoked a second time.
A series of proxy IP addresses (10.46.125.200/16 through 10.46.125.220/16) allocated for use on older ARX platforms is included in the command output when show ip proxy-addresses is executed on an ARX-1500, ARX-2500, or ARX-VE. The output for this command on those platforms now shows the corresponding MAC address for each such proxy IP address as "Unresolved"; previously, an unused MAC address was shown for each.
Release 6.0.0 included the following fixes and enhancements, also included in this release.
Release 6.0.0 supports two new hardware platforms: the ARX-1500 and the ARX-2500. These are 1U devices. The ARX-1500 has eight 1-Gigabit interfaces, and the ARX-2500 has four 1-Gigabit interfaces and two 10-Gigabit interfaces.
Clients can now access multiple namespaces through a single Virtual IP (VIP) address.
As of Release 6.0.0, ARX feature functionality is controlled on each ARX by a license file that determines which features are active and which are disabled, according to the terms of the license agreement associated with that specific ARX. An ARX cannot be used without a valid license, and any customer upgrading to Release 6.0.0+ must obtain and activate a license in order to use the features and functions of the ARX.
Performance of certain directory enumeration operations for NFS clients has been enhanced by reducing client-to-ARX network traffic by a factor of 50. This is especially noticeable if the NFS client's network path to the ARX is low bandwidth, high latency, or both.
Release 6.0.0 provides a number of supportability enhancements and improvements of the CIFS subsystem, including new probe commands and configuration enforcement.
An NFS character encoding is the numeric encoding for the characters used in a human language. In former releases, the ARX supported two character encodings for its NFS services: ISO 8859-1 and UTF-8. As of Release 6.0.0, NFS on the ARX also supports EUC-JP and Shift-JIS, two encodings for Japanese, along with KSC5601 for Korean.
Each ARX volume resides in a volume group, which has a finite amount of memory and processing power shared among its volumes. Volume groups are a means of isolating namespaces in the ARX's memory, so that the failure of one or more namespaces in one volume group does not affect the performance of namespaces in other volume groups.
In previous releases, volume groups were known as "VPU domains." The terms "VPU" and "VPU domain" are no longer valid as of Release 5.3.0. This change in terminology has been made to reflect more accurately the use of dynamic, virtualized resources in newer ARX models.
Release 6.0.0 adds the following fixes to the ARX:
The current controllers use firmware version "5.2-0".
The new controllers use firmware version "5.2-0".
The output of the ARX CLI command show chassis disk shows the firmware version, which can be used to determine which controller the chassis is using.
There are downgrade limitations to consider: releases or hotfix releases that are "unaware" of this new firmware version will not work properly with the new RAID controllers.
Before downgrading from this release, consult F5 Support for further guidance if your chassis has a new RAID controller.
A problem in the handling of internal logical IP addresses on redundant pairs of ARX-1500s and ARX-2500s, which prevented Mac OS clients from using their desktops, has been fixed.
Now the operation succeeds after the above sequence.
If the backup ARX rebooted while the active ARX was processing a configuration change, the internal database process held open a transaction indefinitely. This open transaction resulted in a growing set of syslog messages, indicating that "Object Manager transaction id ... is now time-period seconds old." This also slowed the performance of other processes on the ARX.
A poorly timed reboot of the backup ARX no longer triggers these issues.
During a disk replacement in an ARX-2000 or ARX-4000, it was possible for the ARX to misinterpret the state of the failed disk and its replacement. When this occurred, the incorrect disk state blocked the raid rebuild operation. This prevented the overall disk replacement from succeeding.
The CLI Reference and CLI Storage Guide described an incorrect syntax for nested OUs. The context of this documentation is the domain-join operation, which joins an ARX CIFS service to a Windows Domain (or an OU within a domain). The documentation stated that you should separate each OU layer with a backslash (\) character; the CLI documentation now shows the correct forward slash (/).
Some ARX-500 units were released to the field without the programmed serial numbers that are displayed when the show chassis command is executed. The process of acquiring a base registration key for ARX software license activation has been modified to accommodate these units.
In former releases, there was no warning in the CLI or the ARX manager (GUI) when you entered the gbl CLI mode or otherwise started to edit the global configuration. Now there is an appropriate warning.
An ARX-VE disconnected a CIFS client that tried to copy a file to an ARX service with a particular configuration issue. This failure occurred when SMB signatures were enabled on only the back-end server and not the front-end CIFS service on the ARX.
When a redundant pair was unable to form due to a "Peer synchronization failure", the collect, show namespace, and show virtual service operations often took an excessive amount of time. This occurred only on the initial rendezvous for the redundant pair. Now these operations are slowed by only 1 to 4 minutes under these circumstances, as opposed to hours.
The ARX VIP PORTMAP service did not respond correctly for non-supported RPC service versions, which prevented the use of common RPC-query tools. For example, rpcinfo -t arx-vip 100003 (NFS) and rpcinfo -t arx-vip 100021 (NLM) both failed. Now, both of those commands succeed for any ARX VIP running an NFS service.
In a multi-protocol (CIFS and NFS) ARX volume with "euc-jp" character encoding, the ARX software arbitrarily changed filenames with tilde (~) characters. CIFS clients saw the name correctly, but NFS clients saw a different name. In this release, NFS clients and CIFS clients see tilde characters correctly in all filenames.
Previously, all 'filer-subshare' exports that shared out a single CIFS volume (or a multi-protocol volume supporting CIFS) were required to have unique names, even when the subshare exports were configured in different CIFS services. This restriction no longer exists.
Previously, the wizard included a selection that corresponded to a deprecated CLI command that the ARX software no longer recognized.
The circumstances were:
If all of the above conditions were met, the replaced peer (in the Backup role) was not able to rejoin the cluster.
The policy engine logged a syslog message whenever it refused to migrate a file to a share that was too full. (A share is considered too full when it has dropped to its minimum free space, set with the policy freespace command or its GUI equivalent.) These messages were correct, but excessive. Now the syslog includes only a summary message covering many migrations at a time.
A problem that caused the CLI to generate a core file when the ssh-host-key rsa encrypted-hostkey command was invoked with command completion (that is, ssh-host-key rsa encrypted-hostkey ?) has been fixed.
The ARX sometimes exceeded 2,048 simultaneous tree connections in a single TCP connection. Some back-end file servers cannot tolerate this number of tree connections per TCP connection.
An ARX-CIFS service issued a STATUS_INVALID_PARAMETER response whenever a client tried to copy an EFS encrypted file to the service. This resulted in an unclear error at the Windows-client application. Now the CIFS service returns a STATUS_ACCESS_DENIED error, which is more likely to be interpreted into an understandable error by client applications.
"press yes to continue, or r to restart"
This could be misinterpreted as a command for rebooting the ARX. Now the interview clearly states that "r" only restarts the interview script.
Release 5.3.1 included the following fixes and enhancements, also included in this release.
Release 5.3.1 introduced a production version of the new ARX Virtual Edition (VE) platform, which runs as a VM guest on hypervisor hardware. This is a software-only platform for the storage services of the ARX, with production-grade limits for maximum number of volumes, file-server shares, files, and so on. Release 5.3.0 was the trial version of the ARX-VE, which is also supported in the current release.
Minimum system requirements:
The ARX-VE requires the following resources from the hypervisor:
These are defined in the OVF template. Please contact F5 technical support prior to making any change to the settings in the OVF template.
Converting from the trial version to the production version: Converting an ARX-VE running Release 5.3.1 from a trial version to a production version requires you to purchase an ARX-VE production license from F5 Networks. Clear the evaluation license and enable the production license. You will be warned to increase the memory allocated to the ARX-VE to 4GB, and to provision a second CPU core. You will then need to reload the ARX-VE instance.
The ARX-VE supports a single interface (a VNIC on its hypervisor host), which is used for management as well as client/server traffic. It differs from other ARX platforms in the following general ways:
In addition, observe the following practices when operating the ARX-VE:
Release 5.3.1 added the following fixes to the ARX:
Release 5.3.0 included the following fixes and enhancements, also included in this release.
Release 5.3.0 supports the new ARX Virtual Edition (VE) platform, which runs as a VM guest on hypervisor hardware. This is a software-only platform for the storage services of the ARX, useful for demonstrations of ARX storage as well as for pre-staging for one of the ARX hardware platforms.
The ARX-VE supports a single interface (a VNIC on its hypervisor host), which is used for management as well as client/server traffic. It differs from other ARX platforms in the following general ways:
NOTE: You cannot apply upgrade releases or hotfix updates to this version of ARX-VE.
Release 5.3.0 added the following fixes to the ARX:
As a result of updates to the ARX Linux kernel, the file tracking copy of metadata to an NFS share no longer causes user processes to hang. It is no longer necessary to use a CIFS share when copying metadata to the file tracking archive.
This was a Maintenance Release for the 5.02.nnn series of software releases. It did not include any new features or enhancements. It contained the following fixes:
This issue applied to a disaster-recovery (DR) scenario, where the global configuration from a failed "active" ARX cluster was loaded onto a "backup" ARX cluster. If the active cluster had any wins-alias in its configuration, the load command failed on the backup cluster. Specifically, the load failed with an OM_RECORD_INSERT_FAILED error.
For further root cause analysis, code has been changed to collect additional debug information if the issue arises again.
If only a single user is assigned to the namespace, then the permissions act correctly.
Windows Explorer and other applications can poll an ARX CIFS service for changes in a given directory and its subtree. The ARX response to each poll is called a change notification. The ARX service must poll all of the CIFS servers behind it, so a poll of a full subtree significantly increases network traffic. By default, the ARX CIFS service therefore only responds with changes in the root of the directory. Under the following circumstances, the ARX CIFS service sometimes returned empty responses:
Copying an ARX release file from a namespace resulted in a spurious error, and the copy failed. The error was named FILE_COPY_SRC_NOT_REL, and it stated that the file requires a ".rel" extension. The error and failure occurred even when the file had the correct extension.
The snapshot operations failed due to the changed volume state.
Replacing a failed hard disk in an ARX-2000 or ARX-4000 occasionally resulted in a system crash that produced a core-memory file. The core was produced by /acopia/bin/chassnew, as shown by the show cores core-file backtrace CLI command. This release corrects the software fault that caused the crash.
In this case, the ARX changed the DC in the client's domain from "Active" to "Backup" status. Now it demotes the correct DC, from the intermediate domain.
The auto-migrate feature is supposed to migrate files off of a share in a share farm when the share's free space drops to its "maintain-free-space" threshold, and it is supposed to stop migrating off the share after its free space rises to its "resume-migrate" threshold. When the policy engine performed a free-space probe at the wrong time, it canceled the auto-migrate operation before the share's free space reached its "resume-migrate" threshold. Now the free-space probe does not cancel the auto-migrate operation.
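The corrected threshold behavior amounts to simple hysteresis: draining begins when free space is low and continues until free space fully recovers. The following is an illustrative sketch only (class and method names are hypothetical), not ARX code:

```python
class AutoMigrate:
    """Hysteresis sketch: start migrating files off a share when its free
    space drops to maintain-free-space; keep migrating until free space
    rises to resume-migrate."""

    def __init__(self, maintain_free_gb: float, resume_migrate_gb: float):
        assert resume_migrate_gb >= maintain_free_gb
        self.maintain = maintain_free_gb
        self.resume = resume_migrate_gb
        self.draining = False

    def on_free_space_probe(self, free_gb: float) -> bool:
        """Return True while files should migrate off the share."""
        if self.draining:
            # The fix: a probe no longer cancels draining until the share's
            # free space has risen to the resume-migrate threshold.
            if free_gb >= self.resume:
                self.draining = False
        elif free_gb <= self.maintain:
            self.draining = True
        return self.draining
```

For example, with maintain-free-space at 10G and resume-migrate at 50G, a probe reporting 30G free while draining is in progress no longer cancels the migration.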
375408, 375786, 376495, 375083
The reports from the nsck ... report inconsistencies operation included non-existent metadata inconsistencies, and the nsck ... rebuild operation created metadata inconsistencies between the ARX volume and its back-end shares. Both of these issues stemmed from the same software problem, which is now resolved.
When an ARX virtual snapshot takes longer than two minutes, the WinRM connection from ARX to the Windows file server may be reset by the Windows file server, causing the snapshot operation to fail. The ARX now re-establishes the WinRM connection and completes the snapshot operation with the new connection.
369794, 368478, 369209
An internal locking problem was corrected in a rarely encountered error path. Prior to this correction, if the error path was triggered, tasks within the ARX software could block indefinitely, leading to severely degraded user access to ARX-virtualized file systems.
The timeout for connecting to a domain controller, for NTLM/NTLMv2 authentication when constrained delegation is used, has been increased from 10-20ms to about 10 seconds. This will allow the authentication mechanisms to work as expected when ARX is configured with domain controllers that are slower than normal.
An SNMP get operation sometimes caused the ARX software to fail, produce a core-memory file, and reboot. The failure occurred only if there was at least one CIFS-authentication failure in the database, the failure was one of a small set of failures, and CIFS-authentication statistics were included in the returned data.
351863, 349196, 349933
NSM processors on the ARX-4000 sometimes failed to boot after an upgrade to 5.2.0 (or later) firmware. This was only an issue on an ARX-4000 in a redundant pair. Now the processors boot successfully after a firmware upgrade.
ARX performance was slower than it should have been in lossy networks. When network retransmits occurred due to external network issues, the retransmit algorithm caused extra time to be lost. The retransmit algorithm has been improved so that it stops exacerbating ARX-performance problems in lossy networks.
Some upgrades of redundant pairs failed with an error on the upgraded (backup) peer, visible in the show redundancy output. The error prevented the redundant pair from forming. The error cleared when both peers were upgraded. Now, the error no longer occurs on upgrade.
Displaying chassis information in the GUI sometimes showed Disk Status as Good when the actual status was Degraded. The criteria used to determine Disk Status have been modified to characterize the status more accurately as Optimal or Degraded.
341077, 344350, 348438
The no redundancy protocol CLI command now removes only external member ports from the private VLAN, and will not remove the internal ports assigned to that VLAN. In addition, the layer 2 software has been changed so that any attempt to delete the private VLAN is rejected. Also, the layer 2 software now includes additional instrumentation for diagnosing failures with IPC timeouts on SCM to NSM internal ports.
A problem that occurred when an attempt to mount a CIFS quorum disk failed has been fixed, and the corresponding logging has been improved to translate the CIFS error/status code to a text string.
Release 5.2.0 included the following fixes and enhancements, also included in this release.
Release 5.2.0 contained several supportability enhancements, CIFS enhancements and some other options designed to take advantage of popular file-server features. These are all described in the subsections below.
Release 5.2.0 contains several enhancements for CIFS front-end services. These are designed to tighten security and to improve ease-of-use for administrators.
An ARX-CIFS service delegates its storage services to the filers behind it. Release 5.2.0 supports constrained delegation, so that you can constrain the CIFS service to delegate only to the file servers it uses. (Former software releases allowed the ARX to delegate its services to any filer.) If you use constrained delegation, the namespace behind the service no longer requires an NTLM-authentication server to support NTLM or NTLMv2, and you no longer need to install an ARX Secure Agent (ASA) on any of the CIFS service's DCs.
Constrained delegation is only possible for a domain if the Domain's functional level is Windows 2003 or later.
A domain administrator can upgrade an ARX-CIFS service to constrained delegation at a Windows Domain Controller (DC). On the DC, the administrator invokes the Active Directory Users and Computers application, finds the machine account for the ARX service, trusts the account for "delegation to specified services only," and identifies all of the CIFS servers behind the ARX. On the ARX, the probe delegate-to command provides a list of all CIFS servers behind a particular ARX service, and confirms that they are properly configured at the DC.
After upgrading a CIFS service to use constrained delegation, it is possible for the service to require a configuration change. Refer to Required Configuration Changes for instructions.
NOTE: Constrained delegation is more secure than unconstrained delegation, and requires that all of the CIFS service's filers be joined to the same Windows domain as the CIFS service itself. Also, all of the filers must support Kerberos authentication.
ARX CIFS services and namespaces can now support SMB signing. SMB signing is the process of adding digital signatures to every Server Message Block (SMB) between a CIFS server and its clients. This protects against man-in-the-middle attacks, but creates a performance penalty.
If you implement SMB signing at your site, the ARX can support it as needed. You can enable SMB signing between CIFS clients and one of your CIFS services, and you can enable SMB signing between an ARX namespace and all of its filers.
The ARX software uses its Active-Directory (AD) site to calculate its preferred Domain Controllers (DCs). In large AD deployments with multiple sites, the AD administrator can identify the sites and assign DCs and subnets to each site. The AD administrator performs this site configuration externally, on a DC. When the ARX automatically discovers the AD configuration (with the active-directory update seed-domain CLI command, or its GUI equivalent), it "prefers" DCs in the same site as its proxy-IP subnet. This removes the administrative burden of manually setting DC preferences on the ARX.
The subnet for the proxy-IP addresses must be assigned to an AD site for the automatic discovery to function properly. You assign a subnet to an AD site at the DC. Use the show ip proxy-addresses CLI command to find the subnet for your proxy-IP addresses.
This release includes several enhancements for the CIFS subshare feature. A CIFS subshare is any CIFS share that exists in the directory tree of an imported CIFS share. Given the correct configuration, a client connecting to an ARX subshare also connects directly to the corresponding file-server subshare, instead of connecting to the import share above it. This ensures that the file server uses the subshare ACL instead of the root-share ACL.
This release enhances the subshare feature by making it easier to manage, especially in large configurations:
Two former CLI commands, sync shares and cifs export-subshares, are now obsolete. They have been removed from the CLI, and their GUI counterparts have been adapted to the new operations.
In releases prior to 5.2.0, the ARX Manager's Create Virtual Service wizard included a Discover button for subshares. In 5.2.0, the subshare discovery mechanism was enhanced to include an asynchronous operation that changes the back-end file server configuration by adding subshares to any shares that do not already include them. This operation was deemed too heavyweight to run as part of a wizard, so the button was removed. The new workflow is to create the virtual service, edit the managed volume, click the Share tab, and click the Sync button. This syncs all back-end subshares and exports the subshares from the virtual service.
The 5.2.0 release now supports replica-snapshot shares in managed volumes. A replica-snapshot share is a constantly-updated duplicate of one of the volume's standard shares. You use standard file-server-replication tools to copy the primary share's files to the replica-snapshot share, then the managed volume can snapshot the data at the replica-snapshot share. This allows you to keep a much smaller number of snapshots on the primary share. The managed volume's CIFS clients can access these snapshots through standard means (such as the "Previous Versions" tab in some Windows releases).
A redundant pair of ARXes, called an ARX cluster, can now be used as a backup for an ARX cluster at a primary site. This feature is designed for sites where the back-end file servers independently synchronize data between the two sites. To prepare an ARX cluster for a disaster, you can copy the ARX's global configuration to its backup cluster on a regular schedule, so that the backup cluster always has an updated copy of the active cluster's configuration. In the event of a disaster at the active site, an administrator at the backup site can load and activate the configuration there. ARX clients can then connect to their services and storage at the backup site. The client-side names of all services and shares remain consistent after the site failover.
You can also fail over individual services from one site to another, so that some services run on one ARX cluster and other services run on the second cluster.
ARX administrators can now use their Windows credentials to log into the CLI or GUI. This requires some minimal configuration to provide one or more Windows groups (such as "Domain Admins") with administrative privileges on the ARX, and to allow Active-Directory authentication into various access points (such as SSH for the CLI or HTTPS for the GUI). Use the authentication CLI command to allow AD authentication, and use the group CLI command to start provisioning a Windows group for ARX administration. For detailed instructions on these commands, refer to the CLI Reference.
This release also contains the auto-diagnostics feature, which can regularly collect usage statistics from your ARX and send them (through email, encrypted) to F5 Support. The engineers at F5 Support can analyze this data over time, watching for trends and, if necessary, contacting you about preventative actions. The feature is recommended, but it is optional; you can use the gbl-mode auto-diagnostics command (or its GUI equivalent) to activate the feature.
The collect state command (and its GUI equivalent) now includes important log files to aid with problem diagnosis, but still produces a zip file that is small enough to be portable. The collect state file requires a much shorter upload time than the zip file from collect all or collect diag-info. You can use collect state for an initial diagnosis, and possibly follow up with a later collect all if further diagnosis is required.
Release 5.2.0 supports a SOAP-based API for monitoring ARX configuration and file changes in its managed volumes. You can use third-party software that acts as a client for this API.
Release 5.2.0 included the following fixes, which are also included in this release:
The problem had been caused by client PIDs being sent to the filer with values of 0 rather than with their correct values.
A customer requested that Sync Share functionality be added to the ARX Manager. This functionality is now available in the ARX Manager on the Managed Volumes Details Share tab, via the Sync Shares button.
Automatic Volume Sizing (AVS) was running too slowly. When Automatic Volume Sizing was disabled, the AVS_NEAR_MAX_CAPACITY trap triggered too early on small-capacity volumes. The trap now fires when the free space in the volume reaches either 10 percent of capacity or 2 million free files, whichever is smaller.
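As an illustrative sketch of the new threshold rule (the actual trap implementation is internal to the ARX; the function below only models the rule as stated above):

```python
def avs_near_max_capacity(total_files: int, free_files: int) -> bool:
    """Model of the AVS_NEAR_MAX_CAPACITY rule: the trap fires when the
    free-file headroom drops to the smaller of 10 percent of the volume's
    file capacity or 2 million files."""
    threshold = min(total_files // 10, 2_000_000)
    return free_files <= threshold
```

On a small volume the 10-percent bound governs, so the trap no longer fires prematurely; on a very large volume the 2-million-file bound governs.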
The ARX lost contact with a back-end CIFS file server and, as a result, locked out all user access to all volumes and shares. This issue was fixed by adjusting the fault-injection timeout path and updating the setup script to create shares more quickly.
Constrained Delegation on the ARX did not function for multiple-tier authentications where an SFTP server was tier 1. That is, if an SFTP server delegated to the ARX-CIFS service, and the ARX-CIFS service in turn delegated to a CIFS server behind it, the SFTP clients could not get directory listings.
If there is a router between the metadata filer and the ARX, it is possible for the ARX to receive an ICMP unreachable message from the router, causing a service interruption (DNAS crashing and restarting). The ARX now handles ICMP unreachable messages from NFS metadata filers or from routers in between.
By default, Windows 2008 R2 servers require 128-bit encryption for their NTLM SSP sessions. This encryption was not supported for filers behind the ARX. Now the ARX supports 128-bit encryption for NTLM SSP sessions.
Samba-based filers are sometimes deployed without the "Domain Admin" privilege configured for the proxy user. The Samba server privilege probe now verifies that the proxy user is properly mapped to root.
The source IP address in a log message should be the ARX proxy-IP address, but it was instead the file server's IP address. The fix correctly logs the ARX proxy-IP address as the source IP address in the log message, instead of whatever information was left over in the buffer.
Two issues were fixed. First, re-scanned directories could make the files-scanned counter larger than the total number of files in the volume; if that happens, the percent-complete figure is no longer displayed. Second, the queue counters (first-time migrates and re-queued migrates) could get out of sync, causing the first-time migrates counter to wrap to 4294967295; this no longer happens.
The ARX previously considered a domain controller (DC) in another Active-Directory forest to be "unusable" if it had a particular security setting. (This status appeared in the output of the show active-directory status CLI command, or its GUI equivalent.) The DC security setting was "Minimum session security for NTLM SSP based (including secure RPC) clients", set to require 128 bit encryption. Now, you can enable this security setting on a Windows 2003+ DC in another forest and the ARX recognizes it as useable.
After a reset was issued, the system took too long to respond to pings. The fix makes the RAID status-reporting software behave correctly when one of the physical RAID drives is physically missing, or behaving as if it is.
Due to an atypical configuration, a customer experienced a dual NSM core when restarting the ARX Manager. The fix cleans up state when a user removes a file server whose connection is down; previously, this leftover state prevented the user from reusing the IP address.
A policy appeared to be aborted. The code has been fixed so that internal admin client connections are no longer exempted from tearing down file-server connections and sessions. As long as policy releases all the connections, the file-server sessions are refreshed and any credential change on the back end takes effect. In addition, there is a new option to terminate all outbound file-server connections used by the internal admin service.
The ARX filer-subshares feature sometimes creates shares on back-end file servers that begin with the prefix "_acopia_". Such share names have a well-defined syntax, and the ARX assumes that it is free to manipulate these shares. With this fix, the ARX is now more robust in its handling of "_acopia_" shares that do not follow the correct syntax (for example, shares created manually by a curious admin user). F5 strongly discourages manual creation of back-end shares whose names begin with the string "_acopia_".
In some cases, a busy file server and a busy ARX can alter the login process from the ARX to the file server, changing the timing of messages. In rare cases, when authorization failed on the first attempt to the file server, the NSM that initiated the login would crash. This condition is now handled, and the second authorization attempt proceeds normally.
There were issues found with parent rename and directory renaming. Windows file servers do not allow the manipulation of directories whose names look like DOS device names (for example: COM1, LPT1, AUX), but Windows does not prevent the creation of such directory names. The ARX must therefore prevent the creation of directory names that Windows file servers will not accept. These restrictions are enforced dynamically, and only apply if there are Windows filers in the volume. If a pathname includes a DOS device name as a pathname component, files and directories underneath that directory cannot be placed or migrated to a Windows file server. A rename that would create such a pathname on a Windows file server is also not allowed. Additionally, if such a name is discovered to exist on a Windows filer, the ARX will not import it. Sync, import, and inconsistency reports note the reserved names when they are encountered. Client-initiated proxy operations fail with ACCESS_DENIED and a log message. The corrective action is to rename the directory to something acceptable.
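The reserved-name restriction can be modeled as a simple check over pathname components. This is a hypothetical sketch, not the ARX implementation; the exact list of names the ARX enforces is not specified here, so the set below follows the standard Windows reserved-device list:

```python
# Standard Windows reserved device names; the list the ARX actually
# enforces may differ.
RESERVED_DOS_NAMES = (
    {"CON", "PRN", "AUX", "NUL"}
    | {f"COM{i}" for i in range(1, 10)}
    | {f"LPT{i}" for i in range(1, 10)}
)

def has_reserved_component(path: str) -> bool:
    """Return True if any pathname component is a DOS device name.
    Windows reserves these names case-insensitively, with or without
    an extension (both "COM1" and "COM1.txt" are rejected)."""
    for component in path.strip("/").split("/"):
        stem = component.split(".", 1)[0].upper()
        if stem in RESERVED_DOS_NAMES:
            return True
    return False
```

Under this model, a path such as /projects/COM1/report.doc would be blocked from placement on a Windows filer, while /projects/common/report.doc is acceptable.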
The number of supported attach points was not clearly defined or enforced. We now count and enforce the maximum number of attach points that can be configured. We also now restrict the length of the front-end and back-end paths, which reduces memory consumption.
A customer was reloading the primary switch in an HA pair to recover from an NSM core. During the shutdown, the secondary switch lost access to the quorum disk. This caused the secondary switch to reboot per design because it cannot access the quorum and does not have heartbeat from the primary. The code has been fixed to keep the switch from losing access to the quorum disk.
When a share with no free space was enabled for the first time, from the GUI or CLI, the operation returned an unclear error. This error message has been updated, and the known results of the write test are now returned to the user.
A customer experienced a failover event after which the VIPs on switch A took 15 minutes to become accessible, whereas the two VIPs on switch B were accessible instantly. This was due to a MAC-addressing issue, which has been fixed.
Some clients who ran Excel macros at the end of the day reported that the macros took much longer to complete than expected. The software has been fixed to no longer require OMDB access or a listener.
We have enhanced the software to define disaster-recovery terms more clearly. The force argument has been removed from the remove-share nomigrate command; the command is now remove-share offline instead of remove-share nomigrate force. For the global namespace volume share no filer command, the force argument has been changed to offline. Lastly, for the global namespace volume, the optional force argument was also changed to offline.
On a customer cluster, the ARX Manager was very slow to respond to listing requests. We have restructured the way the ARX stores policy configuration information so that it can be retrieved more efficiently when the CLI or GUI requests it.
The ARX did not allow RADIUS authentication of users when RADIUS network device filters were turned on. The RADIUS client code on the switch has been fixed, and the NAS-IP-Address attribute is now returned.
Occasionally, a shadow copy failed during a target-volume rebuild. We changed the way the shadow-volume receiver decides whether to rebuild its database, making it less sensitive to a single "not found" error.
Polling of a configured time server would continuously fail to transmit and/or receive if the server was configured on the ARX prior to an ARX network-interface change. This sometimes happened when executing a saved running-config script, where numerous configuration commands are executed in rapid succession. The ntpd daemon was modified to automatically reconfigure its association with a time server after a maximum number of transmit/receive failures.
When a back-end share filled up behind a managed volume, and one of its CIFS clients attempted to change permissions on one of that share's directories, all of that managed volume's CIFS sessions hung. An administrator needed to increase the size of the share (or, in some cases, the NetApp quota) to restore CIFS service to the managed volume.
The ARX now detects disk-full conditions on its back-end filers, blocks client operations that could result in a widespread loss of service, and allows CIFS clients to resolve the problem by removing files. The ARX also sends SNMP traps to alert you of any directories or shares that are adversely affected by an out-of-space share. (See the SNMP Reference for full documentation on all SNMP traps.)
When an SNMP trap raises an alarm condition, the alarm appears in the output of the show health CLI command until it is cleared by another trap. In previous releases, there was no manual method for clearing these alarms. Release 5.2.0 introduced the clear health CLI command for this purpose.
The output of the collect state CLI command now includes syslog, procdat, and traplog output. For more details, see More Logs in "collect state" Payload.
A background process configures all attach points (in presentation, or direct, volumes) after a failover. In a system with a large number of attach points, this sometimes required several minutes. If another failover occurred before the process was finished, that failover sometimes required nearly one hour to complete. This fix dramatically increases the speed of attach-point configuration and failovers: the same configuration now takes approximately 1 minute.
This was a Maintenance Release for the 5.01.nnn series of software releases. It did not include any new features or enhancements. It contained the following fixes:
A problem that caused storage jobs to pause indefinitely following the execution of a cancel import command has been fixed. The problem was caused by an internal import lock that was not being released at the appropriate time.
Executing the nsck destage or nsck rebuild command now lists the affected services much more rapidly than before. Previously, it took a very long time for these commands to list all of the affected services before returning a prompt for the user to confirm the action.
When a managed volume imported its shares using NFSv2, some files and directories were occasionally missed. The import succeeded, but the missed files/directories appeared as inconsistencies. Now an NFSv2 import captures all back-end files reliably.
Now the ARX takes snapshots of the correct (physical) NetApp path.
348799, 39560, 348979
Previously, when an NTLM server was removed while thousands of NAT rule actions were installed and in the initial state, the NSM watchdog could expire because the removal of the rules took longer than the watchdog allowed. Now, the watchdog continues to be serviced while the removal of NAT rule actions completes.
A rare race condition between a CIFS-client disconnect and an internal cancel- search command could potentially cause a software failure. The software failure produced a core-memory file. The race condition has been corrected.
Many CIFS clients were unable to connect to ARX storage during failovers and failbacks between NSM cores. The client outages sometimes lasted for several minutes. This fix addresses issues in the control plane that contributed to the outages.
A problem in the NSM's memory allocation could cause the NSM to be shut down by the control plane. This has been fixed by ensuring that the internal watchdog is serviced in the memory-heap management routines.
347298, 347192, 348800, 350191, 350964
The Network Services Module could generate core files when it receives multiple client requests that need to be sent to a filer, but the authorization credentials to that filer generate a failure (for example, a LOGON_FAILURE). This issue has been corrected.
Some operations (in particular metadata migration) failed to use the configured SPN as they should have. This prevented the operations from working when using a clustered Windows file server. This has been fixed to use the configured SPN.
When a presentation (or direct) volume has an attach point into a managed volume with snapshots, the "~snapshot" directory was not always visible in the attach point. This only occurred if the attach point attached to one of the managed volume's subdirectories. For example, suppose a presentation volume, "/pvol", attaches one of its directories ("/pvol/attachDir") to a managed volume, "/mvol", at "/mvol/dir1/dir2." A client's directory listing of "/pvol/attachDir" did not include a "~snapshot" directory before this fix; now it does.
When a file migrates to the ARX Cloud Extender (ARX-CE), the ARX-CE compresses the file and sets its "sparse file" attribute. Formerly, a file that migrated to the ARX-CE, then to an EMC Celerra, and back to the ARX-CE would be corrupted on the final migration. This is a file-server issue, caused by different uses of the "sparse file" attribute. Now the ARX works around this issue, so this migration path does not corrupt files.
Release 5.1.7 included the following fixes, also included in this release:
In environments with Active Directory contents comprising hundreds of thousands of user accounts, the v5.1.7 HF2 Secure Agent sometimes failed to scan the entire user database, resulting in some users failing authentication. This has been addressed by changing the Secure Agent's default timeout to 60 minutes, allowing more time for the user database to be read.
A customer experienced a problem after performing a shutdown as part of a DR test. When the system was powered back up, there was an error with the object manager database (which was then renamed), and the system came up without a configuration. This problem has been fixed.
A customer's performance data showed that find_first2 requests were up to 40X slower than the file server's response time. The software has been fixed, and performance has been significantly improved.
An NSM crash could happen when there were connectivity or networking issues with a domain controller. While the ARX waited for an authorization response from the control plane, the back-end file server could tear down the TCP connection, causing memory corruption when the domain controller finally responded. Now, the NSM detects and corrects this situation and properly returns an error to the client.
Under load, a CIFS client may issue multiple close requests for a file (since there can be a delay in the response to the first close). When this happened, the NSM could crash upon receipt of the response to the second close request, because it would access a freed object. This has been fixed to account for this possibility.
A customer experienced an NSM core. The code incorrectly assumed that transaction2 operation responses always come from the file server; they can also come from the control plane. This log-parameter mismatch has been fixed.
338421, 38914, 39044, 337650, 339138
Under heavy load, it was possible for the NSM's periodic cleanup functions to be starved, so they did not run to completion and did not return filer connections that had gone down to the free pool. This could cause the NSM to run out of connections and be unable to service more clients. The periodic cleanup functions were changed so that this situation no longer happens.
There was a corner case where DNAS and the NSM lost communications over the DT message path and the NSM began to reconnect. In the meantime, the NSM was still processing NFS requests and attempted to send a lookup to DNAS, which failed. As part of the cleanup processing, there was an attempt to release a DT message that was never allocated, which caused the NSM to core. We now check that the DT message path is in the connected state before attempting to release any DT resources.
When an NFSv2 client used the chmod, chown, or chgrp command on an ARX-volume directory, future ls -l commands erroneously showed a time stamp of December 31, 1969. The time-stamp display issue has been corrected in this release.
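The December 31, 1969 date is the Unix epoch (time zero, January 1, 1970 UTC) rendered in a timezone west of UTC, which is what clients display when a file's time stamp is zeroed or lost. A minimal illustration (generic Python, not ARX code):

```python
from datetime import datetime, timezone, timedelta

# Time stamp zero is the Unix epoch: 1970-01-01 00:00:00 UTC.
epoch_utc = datetime.fromtimestamp(0, tz=timezone.utc)

# Viewed from a timezone west of UTC (US Eastern, UTC-5, as an example),
# the same instant falls on December 31, 1969 -- the date that `ls -l`
# erroneously displayed before this fix.
us_eastern = timezone(timedelta(hours=-5))
epoch_local = datetime.fromtimestamp(0, tz=us_eastern)

print(epoch_utc.strftime("%Y-%m-%d"))    # 1970-01-01
print(epoch_local.strftime("%Y-%m-%d"))  # 1969-12-31
```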
Shortly after a failover, the ls command hung in one of the ARX's NFS exports. This was due to a rare race condition in one of the network processors. The race condition is resolved in this release.
A customer experienced a core because a file request timed out. Now the ARX performs an extra lookup for the object and relies on the file server's response, rather than the object, for the valid response.
A customer experienced a frequent pop-up message concerning the ARX Secure Agent. This was caused by a race condition between the use of an object and the freeing of that object; this conflict has been fixed.
Database (DB) access through an ARX-CIFS volume failed when several DB-access commands were run in a macro. The ARX volume sometimes returned a "Disk or Network Error" message to the CIFS client running the macro. This issue has been fixed.
An NSM crash could occur during network outages, specifically when back-end file servers tear down TCP connections to the ARX. This was fixed by handling internal objects properly in this situation so that memory corruption does not occur in the NSM.
Previously, the client never cached any directory information, so it was forced to query the ARX for all information on every ls, which always appeared uniformly slow. (A user would not see this with native NFS clients and servers, since there is always some locality of reference.) Now, repeated references to the same directories can be satisfied almost completely from the client's local directory cache, so ls is much faster.
In NFS, a duplicate request sent by a client can, in very rare cases, receive two different replies from the file server: the first response yielding an error and the second response yielding a success. The NSM did not deal with this sequence properly (the verifiers did not match), which caused a crash. The ARX now handles this situation properly.
During an upgrade, a virtual server hung in the starting state until browsing was disabled, at which point the global server went to enabled; browsing could then be enabled again without incident. Now the volumes are enabled and assigned to an instance, which fixes this problem.
When trying to set up a trust between two domains, a customer got errors stating that one of the DCs was offline, while the AD status showed them all online. The ARX now checks the forest function level from any reachable DC, and only checks it in the forest-trust configuration. (Windows Active Directory ensures that only Windows 2003/Windows 2008 servers are used as DCs, checks that the forest function level is 2003 or above, and does not allow the forest function level to be downgraded.)
During a forest-to-forest trust configuration, the Win2K8 R2 DC was not accepted. This is now fixed; however, you must not configure the forest trust, but instead configure kerberos auto-realm-traversal.
336269, 35475, 342748
An NSM core occurred due to a double free of a presto packet in an NFS path, coupled with a network outage, which resulted in buffer corruption. The NSM can now send traffic to a file server in this situation without corrupting the buffer.
Two new CIFS shares were added, one to the /data volume and one to the /data directory, which resulted in a core. The systems ping-ponged until they reached the bounce limit. This was caused by code intended to protect against a volume name of "/". We now double-check the volume name and avoid this issue.
Release 5.1.5 included the following fixes, also included in this release:
Previously, when the "count" form of "ip proxy-address" was used after the hardware was set up, then the resultant EIP records were delivered in non-ascending order of EIP address. If the EIPs are processed out of order, then the LIPs for XIPLIP assignments were also out of order (and incorrect). The code has been modified to so that any out-of-order processing of EIP listener events won't cause any problems.
Previously, when the Shadow Copy Rule Edit page was accessed, the Publishing Mode drop-down was set to "Individual" regardless of the active shadow-copy rule configuration on the ARX. The Publishing Mode drop-down now persists and matches the global configuration.
The Prune Target check box in the GUI's Edit Shadow-Copy Rule screen was persistently checked: if the prune-target feature had been disabled, either from the GUI or the CLI, the GUI still displayed it as "enabled." This has been fixed.
The SNMP trap, dnsServerOffline, did not include the failed server's IP address in its message text. The CLI show health command, which shows only the name and the message text of each active trap, therefore did not show which DNS server had failed. This has been fixed.
When calculating internal mappings for a subshare in a particular CIFS service to back-end shares in a volume, the ARX is no longer confused by like-named subshares that point to other volumes.
The reporting of a no-modify import for directories that are DFS links was incorrect. Now the report lists an error of 'DF', which indicates that a DFS link was found, and also lists the directories that failed.
If an import of a share with no-modify enabled resulted in a case-blind directory-name collision, the process failed with a -70 internal error. A -70 internal error is no longer returned in import reports or in syslog.
EDAC i5000 MC0: NON-FATAL ERRORS Found!!! 1st NON-FATAL Err Reg= 0x80000
EDAC i5000: THERMAL Error, bits= 0x80000
The ARX kernel now logs an appropriate number of these messages whenever the issue occurs.
The ARX code spuriously declared its Domain Controllers (DCs) to be "slow" if its DNS server required more than 2 seconds to respond. Most of the DNS queries from the ARX were unnecessary, and this caused the ARX Kerberos processes to repeatedly switch from one redundant DC to another. The ARX Kerberos processes no longer send unnecessary DNS queries.
The domain-join operation resulted in an ADJOIN_PWCHANGE error for some network configurations. This occurred when an external routing device allowed traffic from the ARX's proxy-IP addresses to the domain controllers (DCs), but dropped traffic from the ARX's management-IP address. An internal ARX issue caused the ARX to incorrectly send some domain-join packets from the management-IP address instead of a proxy-IP address. As of this release, the ARX sends all of its domain-join packets from its proxy-IP addresses.
Disk drive numbering in the ARX-2000 Installation Guide has been corrected to identify Bay 1 as the top drive and Bay 2 as the bottom drive. This numbering matches the CLI drive designation and the labeling on the switch. The LED documentation has been updated to indicate that flicker indicates drive activity.
If a volume had a connection failure with its CIFS-metadata share, and then someone attempted a metadata migration before any client accessed the volume, the metadata migration failed with a VOL_MDMIGRATE_FILER_PROBE_FAILED error. This has been fixed.
A drop-down menu was malfunctioning in the GUI. The malfunctioning menu was at the following path: Policy -> Snapshots -> rule-name -> Rule. Whenever you used the drop-down menu, it reverted back to the first snapshot rule in the volume instead of the selected rule. Now it stays set to the selected rule.
When importing a volume in no-modify mode with multiple shares that are expected to merge, with "no import sync" and "no import rename-dir" configured, the import could hit a case collision (or potentially some other condition) and fail immediately, instead of continuing and failing in a delayed fashion as it should. The behavior is now as follows. If you run an import with "no modify", "no import rename-directory", and "no import sync-attributes", then on an attribute collision during import, the report shows the directory with the attribute condition marked as "skipped." If you run an import with "no modify" and "no import rename-directory", but with "import sync-attributes", then on an attribute collision the import logs the inconsistency, stripes the directory, and continues to descend into that directory.
If you ran the show policy details on a volume with a rule that never ran before, the show operation reported an XSL transformation error. This error also appeared for collect diag-info, which invoked the show policy details command.
A file-placement rule failed at volume-scan time whenever a higher-priority rule with no volume-scan matched any of the same files or directories. A no volume-scan rule can now co-exist with any other rule without causing the other rules to fail.
After removing an active-directory alias that was never accepted by any DC, the ARX software did not send an spnAliasUpdateClear trap. This left a persistent alarm condition, spnAliasUpdateRaise, on the ARX.
The no active-directory alias spn CLI command did not remove the SPN from the ARX database until the local DC removed it from the Active Directory DB. Now the ARX DB removes the SPN immediately, and the ARX software continues a background process to delete it from the Active Directory.
SnapshotOp::setupSnapshotCreateGroup: Waiting for 'n' - 'm' more ckpt config records.
The ARX software no longer generates this log message.
bdb_get_metadata_size(): cifs_shim_fstat on fd=134217738 failed [-1].
This message has been suppressed in favor of clearer log messages.
If a client renamed a file to a similar name during a managed-volume import, such as "file.doc" to "FILE.doc," the import could hang indefinitely. This also caused hangs for client operations during the import. Renames no longer cause these import issues.
The show statistics cifs-auth command had incomplete statistics for unsupported protocols. If a client attempted to authenticate with an unsupported CIFS protocol, the resulting failure was not counted in the main output of show statistics cifs-auth. Now it is.
The ARX's Network Services Module (NSM) terminated abruptly if a filer responded with a different SMB command code than the one that the ARX had sent. The ARX now compares the filer reply with the request and drops the reply in the event that they do not match.
On the ARX-500, when an IP address was added to an interface that was on VLAN 1, the IP address could not be removed completely, because it remained in the ARP cache. VLAN 1 was being re-mapped to the primary interface when the IP address was added to the interface. The functionality for removing IP addresses was changed to correct this.
Release 5.1.0 included the following fixes and enhancements, also included in this release:
Release 5.1.0 added the following new features to the ARX:
Release 5.1.0 supports the new ARX-2000 hardware platform, which is a 2U device with 12 1Gbps interfaces.
A Multi-Protocol Volume Supports NFS Symbolic Links (Symlinks) for its CIFS Clients
As of Release 5.1.0, a multi-protocol (NFS and CIFS) volume can display NFS symlinks to its CIFS clients, and allow its CIFS clients to traverse those symlinks. For example, if an NFS client creates a symlink named "pointerDir" that points to "randomDir," any CIFS client can cd to the "pointerDir" symlink to access the "randomDir" directory.
This feature does not support absolute symlinks (such as a link to "/vol/vol2/myDir"). It supports relative symlinks, such as a link to "../myDir" from the current directory.
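The distinction can be sketched generically (this is illustrative Python, not ARX code): an absolute target names a path from the root, while a relative target is resolved against the directory containing the link:

```python
import posixpath

def symlink_kind(target: str) -> str:
    """Classify a symlink target as the feature description does:
    absolute targets (such as "/vol/vol2/myDir") are not supported;
    relative targets (such as "../myDir") are."""
    return "absolute" if posixpath.isabs(target) else "relative"

def resolve_relative_target(link_dir: str, target: str) -> str:
    """Resolve a relative symlink target against the directory
    containing the link, as a traversing CIFS client would."""
    return posixpath.normpath(posixpath.join(link_dir, target))
```

For example, a link in /export/home whose target is "../myDir" resolves to /export/myDir.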
Limiting CIFS Connections to Tier-2 File Servers
Some Tier-2 file servers cannot tolerate a large number of simultaneous CIFS connections. Release 5.1.0 accommodated those file servers with a feature that allows you to set a maximum number of CIFS connections to such a filer. You can use a CLI command, cifs connection-limit, or its GUI equivalent to set this maximum.
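Conceptually, the limit acts as a counting gate in front of the Tier-2 filer. The sketch below is a generic model of that behavior, not the ARX implementation:

```python
class CifsConnectionLimit:
    """Generic model of a per-filer CIFS connection cap: new connections
    are admitted only while the configured maximum is not exceeded."""

    def __init__(self, maximum: int):
        self.maximum = maximum
        self.active = 0

    def try_connect(self) -> bool:
        """Admit a new connection if the filer is under its cap."""
        if self.active >= self.maximum:
            return False  # at the cap: hold off new connections
        self.active += 1
        return True

    def disconnect(self) -> None:
        """Release one connection back under the cap."""
        self.active = max(0, self.active - 1)
```

Once an existing session disconnects, a deferred connection can be admitted again.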
The policy engine offers a number of enhancements as of Release 5.1.0, including the following.
Release 5.1.0 introduced the import priority command to make a managed volume's file and directory mastership deterministic. A master directory is a directory in a managed volume that has duplicates on multiple back-end shares; one share has the master instance of the directory and the other shares have stripes with the same name, permissions, ACLs, and other attributes. A master file keeps its name, whereas matching files on other shares must change their names. You can use the new import priority command to set some shares to priority 1, so that they win mastership for all of their files and directories. This mastership is deterministic; higher-priority shares win mastership on every import and re-import.
Together with Seamless Import, which imports multiple shares while allowing full client access, this feature is a stepping stone toward a full DR solution. An import at Site A can now yield the same file/directory mastership as an import of the same data at DR Site B.
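The deterministic-mastership rule described above can be modeled as a simple priority selection (an illustrative sketch; the tie-break between equal priorities is an assumption here, not documented behavior):

```python
def pick_master(shares):
    """Given (share_name, import_priority) pairs for shares that each
    hold a same-named duplicate directory, return the share that wins
    mastership. A lower priority number wins, so the outcome is the
    same on every import and re-import -- the deterministic behavior
    that the import priority command provides. Ties fall back to the
    share name (an assumption for illustration)."""
    return min(shares, key=lambda s: (s[1], s[0]))[0]
```

Setting a share to priority 1 makes it win mastership for all of its files and directories on every import, at Site A or at DR Site B.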
Release 5.1.0 added the following fixes to the ARX:
The CLI show exports command is intended to examine the shares on a filer or server before you define the server in the ARX database. However, the ARX requires a Service-Principal Name (SPN) to examine a Windows 2008 cluster, and the show exports command did not support an SPN option. Now it does.
An anti-virus (AV) scanner on a DC can potentially block the ARX Secure Agent (ASA) installation. Now a pop-up appears during the installation, prompting the installer to re-configure AV scans as needed.
Previously, there were issues with how objects were handled by Active Directory. An optimization was made to the code path to improve internal communication between Secure Agent components.
37121, 37945, 38883
Previously, in the presence of authentication failures (typically due to misconfiguration), the NSM would crash while attempting to properly log off in-progress CIFS sessions on a file server. This has been fixed.
In a redundant pair where one ARX is upgraded to 5.1.0 and its peer is manufactured with 5.1.0, an administrator experienced a delay in logging in after a reboot. A login was not possible until the ARX reached global scope (that is, until it was possible to enter gbl mode in the CLI). This delay has been fixed.
A problem that caused an outage during the import of shares has been fixed. Now, if a higher-priority share fails to import its root directory, any lower-priority shares fail to import as well.
A particular race condition during a managed-volume import could trigger an unnecessary auto-sync operation. The race condition occurred when one ARX client attempted to remove a file while another attempted to rename it. The auto-sync operation, designed to refresh a volume's metadata after import, had no effect.
The ARX policy engine never recognized that a previously-full target share now had free space. If a placement rule's target share filled to capacity, the rule never resumed after someone added free space.
In the CLI and in the ARX Manager GUI, the collect operation failed whenever you attempted an NFS-copy to a multi-protocol volume. Now, you can use both NFS and CIFS to send a collect file to one of the ARX's multi-protocol volumes.
The copy nfs|cifs operation, which copies files to an ARX volume, is not supported from the backup ARX. Former releases did not include an explicit error message to explain this; an error message in the current release explains the issue clearly.
An internal metadata inconsistency caused a share removal to fail. (From the CLI, you can use remove-share migrate and similar commands to remove a share from a managed volume.) Managed-volume software can now successfully remove a share with these inconsistencies.
When a managed-volume import failed due to a slow metadata share, there were no syslog messages indicating the cause of the failure. Now, syslog messages appear to describe the problem with the metadata share, and to associate the metadata-share issue with the failing import.
The show snmp-server command displayed no output unless there was at least one host to receive SNMP traps. (The snmp-server host command adds a host to receive traps.) Now the command displays the current SNMP configuration under any circumstances.
Previously, a reboot was required after applying the running-config before NTP would function. This has been fixed by moving the NTP server configuration to the end of the running-config, so that ntpd starts polling the NTP server once the running-config has been applied, without an additional reboot or NTP-server reset.
A CIFS client could not traverse an NFS symlink to a directory. Release 5.1.0 introduced CIFS-Symlink Support to address this issue.
Forest to Forest Trust did not work with selective authentication. Now there is a CLI command, kerberos auto-realm-traversal, that can configure the ARX to function with selective authentication.
The NTLM authentication server incorrectly showed as offline if the ARX could not resolve its IP address. This was fixed by adding a 60-second delay before starting NTLM Secure-Agent monitoring during system startup.
Kerberos clients were unable to connect after an upgrade to 4.0.1. This issue is fixed; the ARX no longer advertises NTLMSSP in Kerberos namespaces unless they also support NTLM[v2] or have anon-access enabled.
Full tree walks occurred after a database rebuild, caused by a lack of synchronization in the shadow receiver. This was fixed by enabling the path lock at the right time and on the right paths.
The Remove Share report did not indicate shares that had an "access denied" problem. The software now indicates in the remove report whether the error came from the share being removed or from the relocate-dirs share.
GUI: Added new status icons to the Exports page. These now include all of the following: Offline (red star), Degraded (yellow triangle), Online (green circle), Read Only (yellow triangle), Not Found (red star), Unavailable (red star), and Snapshot (green circle).
This was a Maintenance Release for the 5.00.nnn series of software releases. It did not include any new features or enhancements beyond those of Release 5.0.5. It contained the following fixes:
A bug in the NFSv3 TCP proxy code caused a buffer overflow, which resulted in memory corruption and crashes when a 64K WRITE was forwarded to the control plane by the NSM. This forwarding happens only when a client sends a WRITE request with a stale file handle as all other WRITEs are handled completely in the NSM. This has been fixed.
This was a Maintenance Release for the 5.00.nnn series of software releases. It did not include any new features or enhancements beyond those of Release 5.0.5. It contained the following fixes:
An NFS service occasionally created a very large database file, and that file caused reboots to take a very long time. The file grew at a fast rate for NFS clients that mounted, unmounted, and remounted the NFS service at a high frequency. Now, the database file grows at a slower rate for constant NFS mount and unmount operations.
The file-tracking archive behavior held database transactions open for too long. A separate transaction is now used for each rule, which limits the time a transaction is open to one set of archive operations.
38544, 38346, 38587, 38588
An NSM crash could occur when there were connectivity or networking issues with a domain controller. While the NSM was waiting for an authorization response from the control plane, the back-end file server could tear down the TCP connection, which caused memory corruption when the domain controller finally responded. The NSM now detects and corrects this situation, properly returning an error to the client.
An NFS volume sometimes encountered an error, NSM_PRESTO_PKT_MUTEX_ERROR, when a file-history-archive rule took snapshots on back-end filers. This stopped all NFS access to the volume, and required a restart of its front-end NFS service(s).
The snapshot-create reports from a file-history-archive rule contained incorrect file-server information for the metadata share. The incorrect information was the filer name and NFS-export name.
When shares in a presentation volume have multiple attach points to managed volumes, the show host ... path command displayed incorrect path information. The fix was to use an additional qualifier for the attach name when querying the OMDB for path information.
If the Active-Directory (AD) forests were configured manually on the ARX, it was possible to create a CIFS-access problem. The problem was that CIFS clients from trusted domains could not connect to front-end CIFS shares on the ARX.
If a CIFS-client application sends a packet with a pass-through Information Level, the ARX (which does not support pass-through levels) should reject it with a STATUS_INVALID_PARAMETER response. Before this fix, it was incorrectly responding with STATUS_SUCCESS. This created unpredictable results for the client application.
The capture session operation unnecessarily duplicated internal TCP-ACK packets, and continued to duplicate them after you stopped the capture session. (This duplication did not occur on an ARX-500 chassis, or for any capture session that captures packets to/from all proxy-IP addresses.) Now the no capture session operation stops the internal packet duplication.
A snapshot remove operation for a particular back-end share would always time out after 50 seconds. This was insufficient for some back-end servers. After your ARX gets the fix for this issue, F5 Support can set a higher timeout for snapshot-related commands if required for your site.
When a shadow-copy operation copied a file over 4 Gigabytes, it occasionally failed and produced a large core-memory file. This issue was related to issue 35679.
Release 5.0.5 included the following fixes and enhancements:
Release 5.0.5 is functionally equivalent to Release 5.0.1.
ARX Release 5.0.5 is a maintenance update that provides support for new ARX-4000 hardware; specifically, a new control plane with new power supplies.
You can identify whether or not you have the new hardware by a physical examination. The original version of the ARX-4000 used a control plane containing six 3 1/2 inch disk drives. (The serial numbers of these commodity servers start with BZDS.) The new ARX-4000 uses a control plane that contains two 2 1/2 inch disk drives. (The serial numbers of the new chassis start with 0700.)
If your installation has upgraded existing ARX-4000 systems instead of upgrading to the new platform, the ARX-4000 documentation for 5.0.5 contains some information that does not apply to your model. For former versions of the ARX-4000 chassis, consult:
These are included in your 5.0.5 release; you can retrieve these earlier versions from the GUI or download them from the CLI.
Release 5.0.5 added the following fixes to the ARX:
Previously, deleting a report only unlinked the report name from the file system. The disk space for the report file was freed only when all references to the report were removed (unlinked). Other references to a report could include the report being open for copying or collection, and so on.
Now, when a report is deleted, it is first truncated, which terminates the report so that no references to it can remain. When the report file is then removed, its name is removed from the file system and its disk space is freed immediately. As a result, there is no longer a discrepancy between the amount of disk space reported for /acopia/reports before and after the switch is reloaded.
The ARX Manager could encounter a null-pointer exception while editing an Export if the Back button was used. Do not use the Back button in releases prior to 5.0.5; you can use the Back button in 5.0.5 and higher.
Previously, the maximum snmp-server entry limit was checked before both adding and deleting an entry. If the maximum snmp-server entry limit had already been reached, the operation failed. The fix was to check the limit only when adding a new entry.
It is difficult to detect a power supply fan failure on the new ARX-4000 control plane. The control plane power supply LEDs do not change color or indicate failure in any way that you can detect visually. However, if you think a fan failure has occurred, you can inspect each power supply fan for air movement (or the lack of it) to determine whether the fan is dead.
If you have access to the CLI, enter the show chassis chassinfo command, which shows the status of all four power supplies. It is best not to rely on the LEDs, because the LED states differ between power supply manufacturers.
Prior to 5.0.5, when facing the back of the ARX-4000, the control plane power supplies were designated 1/1 (top) and 1/2 (bottom). The data plane power supplies were designated left-to-right as 2/2 and 2/1, respectively.
Starting with 5.0.5, the ARX-4000 includes a new control plane (with new power supplies) and a re-numbering of the data plane power supplies. When facing the back of the box, the control plane power supplies are designated left-to-right as 1/1 and 1/2, respectively. The data plane power supplies are designated left-to-right as 2/1 and 2/2, respectively.
When upgrading an existing ARX-4000 to 5.0.5, take note of these changes. If you experience a data plane power supply failure and consult the output of show chassis chassinfo, the output reflects the new designations. For example, the following output indicates a failure of the left-hand data plane power supply.
bstnA# show chassis chassinfo
Chassis Type Model Number HW Ver. Serial
------------ ------------------------------------ ------- -------------
ARX-4000 SR2500ALLX-F5 0700000006
Base MAC Address Power Fan(setting) Temperature
----------------- -------------- ------------- -------------
00:0a:49:17:78:00 Online Online Normal(<62 C)
Release 5.0.1 included the following fixes and enhancements, also included in this release.
Release 5.0.1 is functionally equivalent to Release 5.0.0.
Release 5.0.1 adds the following fixes to the ARX:
If a managed volume already imported a share from an NTFS qtree, it was unable to import another share from an NTFS qtree with the "ntfs_ignore_unix_security_ops" option. The new share stayed indefinitely in the "Pending Import" state. This only occurred if the first share was imported before an upgrade and the remaining shares were imported after an upgrade.
When CIFS clients unexpectedly cancelled their connections in the middle of a "find" operation (such as Transaction2FindFirst), NSM software allocated memory without freeing it. If this happened often enough, the ARX sent nsmResourceThreshold traps for the "cifsSidBitmap" resource. Eventually, some CIFS clients were unable to connect. The problem is resolved in this Release.
A client could send a file name containing a non-Latin-1 character sequence to a Latin-1 namespace during an import. Files with non-Latin-1 sequences are now rejected during an import to a Latin-1 namespace.
A direct (or presentation) volume could not attach to an NFSv3/UDP export unless the export also supported NFSv2. Direct volumes can now attach to NFSv3/UDP exports whether or not the exports also support NFSv2.
An integer overflow prevented the shadow volume copy from copying files over 4 GB. In addition, the shadow receiver could consume a large amount of memory. A fix prevents the integer overflow for files over 4 GB, and a throttle prevents excessive memory consumption by the shadow receiver.
The NSM generated a core when it failed to handle an error reply from a file server during a snapshot transaction. The issue occurred when multiple transactions ran at the same time: while the ARX was waiting for a response from the file server, it deleted cache information incorrectly, which then caused the NSM core.
Mac OS X clients using SMB file sharing components that are part of the OS were unable to mount shares hosted on the ARX. This was caused by a crash of the NetAuthAgent component on Mac OS X. ARX software in this release works around this problem.
Starting with Release 5.3.0, the ARX supported licensing of its software on the new ARX-VE. Release 6.0.0+ supports licensing on all ARX platforms.
Before you can use or configure storage services on the ARX, you must activate a valid license. If the ARX has a network connection to the F5 license server at http://activate.f5.com, you can automatically activate the license with the license activate command or its GUI equivalent. Otherwise, you can use a manual activation process. Automatic and manual activation are described in the following documents:
Release 6.0.0 introduced an option to place multiple namespaces behind a single VIP; for CIFS namespaces, this places new constraints on the sam-reference file server. When CIFS clients ask for available groups to assign to a given file or directory, they invoke a query to the sam-reference file server. If the VIP hosts a single namespace, the sam-reference file server requires all local groups defined in that namespace. If the VIP hosts multiple namespaces, the sam-reference server must define all local groups from all file servers behind all of those namespaces.
The CLI command character-encoding cifs no longer exists as of Release 6.0.0. It has been superseded by the wins-name-encoding command in gbl-cifs mode; the former command was in gbl-ns mode. If you run an older global-config script containing the gbl-ns command, the command does not function.
If you are upgrading from an earlier release, you may require additional configuration changes based on the features you use. The subsections below explain the configuration changes required for various upgrade paths.
Release 5.2.0 introduced constrained delegation for its CIFS services, and we strongly recommend implementing it for your existing CIFS services. A member of the "Domain Admins" group can implement this at a domain controller (DC). Constrained delegation is a more secure method for running your CIFS services than the unconstrained delegation that was previously available on the ARX. Additionally, clients can use NTLM or NTLMv2 to authenticate to a CIFS service without the aid of an ARX Secure Agent. This is not a required change, but it is strongly recommended.
A CIFS service with a long name may need to rejoin its domain after the change to constrained delegation. If the CIFS service has a name longer than 15 bytes, DCs will reject NTLM or NTLMv2 authentications from the service's clients. (The service name is the first part of the service's FQDN; for example, "myco" would be the name of the service at "myco.ourco.com.") The CIFS-service software raises an SNMP trap if it detects this condition. See the SNMP Reference for details on SNMP traps.
To correct this condition, use the domain-join CLI command or its GUI equivalent. This operation truncates the CIFS-service name and creates a new machine account at the DCs with the shorter name. The CLI or GUI displays the shorter name after you invoke the domain-join operation.
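For example, the rejoin could be invoked from the CIFS service's configuration mode with a sequence like the following sketch. The service name, the prompt, and the argument-free form of the command are illustrative assumptions; domain-join may require additional arguments, such as administrative credentials, so consult the CLI Reference Guide for the exact syntax:
bstnA(gbl)# cifs ac1.medarch.org
bstnA(gbl-cifs[ac1.medarch.org])# domain-join
After the operation completes, the CLI or GUI displays the truncated service name, as described above.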
Any namespace that supports CIFS access has a proxy user that it uses as its identity. The proxy-user configuration is a username, password, and Windows domain that is valid in your Windows network. The proxy user's domain should always be an FQDN (such as "mysrvr.myco.com") instead of a short name (such as "mysrvr"). This ensures that the ARX can authenticate with Kerberos, which can be vitally important in some situations, and is required to support constrained delegation.
The ARX database now keeps the pre-Windows-2000 names for every domain in its active-directory (AD) forest. If your AD forest has a domain with a pre-Windows-2000 name that is not the first component of the full domain name, you must re-discover the AD forest. For example, if a domain in the forest is named "myco.ourco.com" but its pre-Windows-2000 name is "COMPANY" instead of "myco," the ARX software needs to know that some CIFS clients may use "COMPANY" as their domain name. Otherwise, those clients cannot authenticate. You can use the active-directory update CLI command or its GUI equivalent to re-discover the AD forest, including all pre-Windows-2000 domain names.
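For example, the re-discovery might be triggered from the CLI with a sketch like the following. The prompt and mode are assumptions, and the command may require additional arguments or an active-directory configuration sub-mode at your site; consult the CLI Reference Guide for the exact syntax:
bstnA(gbl)# active-directory update
Running this after the upgrade ensures the ARX learns every pre-Windows-2000 domain name in the forest, so clients that authenticate with those names are not rejected.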
This section only applies to installations that upgrade from Release 5.0.6 or earlier. After upgrading beyond Release 5.1.0, the following configuration changes are required to support all of the release's new features.
The ARX cannot support NTLMv2 until all of its ARX Secure Agents (ASAs) are also upgraded beyond 5.1.0. After you upgrade the ARX to this release, you must upgrade at least one ASA; we recommend upgrading all of them. There are two versions of the ASA kit: a 32-bit version and a 64-bit version. Refer to the ARX Secure Agent Installation Guide for detailed ASA download and upgrade instructions.
NOTE: The ASA formerly used pwdump to access a database on the DC; the 5.1.0 release of the ASA software uses other means instead. Please update any anti-virus (AV) application running on your DCs before you use the new ASA version. Refer to Solution Note 10026 for detailed instructions.
If your system contained any multi-protocol (CIFS and NFS) volumes before the upgrade to this release, the volumes require a configuration change to take advantage of a software feature. The feature is symlink support for CIFS clients, described above. To activate CIFS symlinks for a multi-protocol volume, use the no cifs deny-symlinks CLI command. You can run this command from gbl-ns-vol mode for the multi-protocol volume. Once you allow CIFS symlinks, the volume must scan its back-end servers for NFS symlinks and record them in its metadata. A CLI prompt allows you to run the scan as a background process; enter yes to proceed with the scan.
For example, this command sequence adds CIFS symlinks support to the "insur~/claims" volume. The prompt indicates that a back-end scan is required, and offers the opportunity to run it in the background:
bstnA(gbl)# namespace insur volume /claims
bstnA(gbl-ns-vol[insur~/claims])# no cifs deny-symlinks
This volume's configuration has been upgraded from a prior software release.
If symlinks exist in the volume, the volume's metadata must be synchronized
before CIFS clients can take advantage of this feature. You can synchronize
the metadata at any time. User access is not affected by this process but it
may run for hours or days if the volume contains hundreds of millions of files.
Synchronize the metadata for the '/claims' volume now? [yes/no] yes
To perform the scan (and fully activate CIFS symlinks) later, run the sync files namespace-name volume vol-name command on the volume's namespace. You can run this at any time.
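Following the earlier example, the deferred scan for the "insur~/claims" volume could be started with a command like the one below. This sketch follows the sync files namespace-name volume vol-name form described above; the priv-exec prompt is illustrative:
bstnA# sync files insur volume /claims
The scan runs in the background, so client access to the volume continues while it proceeds.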
The ARX Manager UI also provides an interface for running the no cifs deny-symlinks and/or the sync files operations.
This operation is not necessary for any multi-protocol volume created after the upgrade to 5.1.0. By default, new volumes allow CIFS clients to use symlinks, and the symlink scan is performed during the initial import of the volume's back-end shares.
If you previously used a Windows 2003 cluster behind a managed volume, you require one of two configurations to continue using the cluster. The first is recommended as a best practice, and the second is for sites where the cluster does not have a shared Service-Principal Name (SPN):
In either case, you can use the show external-filer command to map the Windows 2003 cluster's VIP to an "external filer" name on the ARX. Then use external-filer filer-name to enter the CLI mode for that filer, and then the spn or no spn command as needed.
For example, the following command sequence finds the external-filer name for a Windows 2003 cluster and sets its SPN:
gffstnA# show external-filer
Name IP Address Description
ch-wd-win1 192.168.158.93 Windows Server 1, back room
ch-wd-win2 192.168.158.106 Windows Server 2, cluster next to Win1
ch-wd-nas 192.168.158.94 NAS filer in computer lab
gffstnA(gbl)# external-filer ch-wd-win2
gffstnA(gbl-filer[ch-wd-win2])# spn fs2k8c95@GGH.MEDARCH.ORG
This section only applies to installations that upgrade from Release 5.0.0 or earlier.
Once you have installed the software, you must make the following configuration change(s).
This section is for administrators who need to upgrade from releases prior to 5.0.0. The 5.0.0 Release includes a new Unicode library that may have an effect on client files and/or directories. The new version of Unicode adds 168 lower-case versions of characters that were uppercase-only in the previous version. The characters derive from the following languages:
After the upgrade to Release 5.0.0, clients cannot open any files or directories with any of these rare characters in their names. This problem should be very rare. The symptoms are different for files than they are for directories, as explained below. If you see these symptoms on any of your files or directories, escalate the problem to F5 Support.
If a Windows client attempts to open a file with one of these characters in its name, an error similar to this appears in Windows Explorer:
Cannot find the \\VIP\unicode/dir1/file%c8%ba.txt
You can resolve this by synchronizing the volume's metadata with the filenames on the filer. From the GUI, go to the Managed Volume Details screen for the volume and click the Sync... button. From the CLI, you can use the sync files command on the ARX volume. This resolves all such file-naming issues in the volume.
Windows Explorer returns the following error if it attempts to open a directory with one of these characters in its name:
Refers to location that is unavailable
You must rebuild the managed volume if it contains such a directory. From the GUI, go to the Managed Volume Details screen for the volume and click the Rebuild... button. From the CLI, use the nsck ... rebuild command.
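For example, a rebuild from the CLI might look like the following sketch. The namespace and volume names are hypothetical, and the exact argument order for nsck is an assumption; consult the Upgrading Software and troubleshooting chapters of the CLI Maintenance Guide for the full syntax and its implications before running a rebuild:
bstnA# nsck insur rebuild volume /claims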
The following items are known issues in the current release.
This issue was identified during pre-release stress testing.
Workaround: To ensure that passwords are sent in encrypted form, use HTTPS (instead of HTTP) to connect to the ARX API.
The uninstall of the ARX Secure Agent may fail to reboot the DC. (35754)
The uninstall of the Secure Agent must reboot the host machine (typically a DC) to finish. The uninstall process has failed to reboot the host DC on some occasions, but the failure is rare.
Recovery: Manually reboot the DC if the uninstall process fails to reboot it automatically.
LEDs remain on/lit continuously after chassis power switch has been toggled to the off position. (372477)
The Status and Alarm LEDs on the ARX 2500 remain lit continuously even after the unit's power switch has been set to the Off position.
The LEDs remain lit as long as the power cord is plugged in. If the power cord is unplugged after the power switch has been turned off, the LEDs will turn off, and the LEDs will remain off if the power cord is plugged back in subsequently. When the unit is turned on again, the LEDs will update normally, in the expected sequence.
This functionality is controlled by non-F5 component vendor firmware.
LED Status and Trap can take up to ~5 minutes to occur after power supply status change - consider configurable polling interval. (372495)
It can take up to five minutes for a power supply interruption to be indicated by the ARX's Alarm and Status LEDs, or by SNMP trap.
The check for zero-length files is not applied consistently when the collect logs command is executed. (372220)
The results will vary depending on whether a date range is used with the command or not.
The command will not show directories that are empty or that contain only other directories.
VLAN configuration via the CLI behaves inconsistently, depending on the order in which a tag and a member interface are specified for the VLAN.
If an interface is marked to tag a VLAN, and then that same interface is marked as a member of that VLAN, both tag and member appear for that interface in show vlan summary output and in the running-config.
However, if the configuration is performed in the reverse order (member first, then tag), only tag appears in the VLAN summary and the running-config; the member configuration is removed.
The CLI show clock output does not always show the correct time after a time-zone change. (24526)
You can use the clock timezone CLI command to set the time zone of the ARX. On rare occasions, the output from the show clock command does not show the correct time after this change. For example:
ARXa500# clock set 14:43:00 01/11/2007
ARXa500# show clock
Local time: Thu Jan 11 14:43:02 2007 EST -0500 America New_York
Universal time: Thu Jan 11 19:43:02 2007 UTC
ARXa500(cfg)# clock timezone America Denver
ARXa500(cfg)# show clock
Local time: Thu Jan 11 14:43:13 2007 EST -0500 America Denver
Universal time: Thu Jan 11 19:43:13 2007 UTC
The time does not conform to the new time zone, though the correct new time zone (America Denver) does appear in the output.
Workaround: Log out of the CLI and log back in.
During the hour of transition from daylight-savings time to standard time, the clock set CLI command incorrectly interprets times in some time zones. (24709)
Times are ambiguous in the hour when daylight-savings time reverts to standard time, once per year. Suppose the transition occurs at 3 AM on the day of the daylight-savings change: time passes from 3 to 4 AM in daylight-savings time, then the clock goes back to 3 AM for standard time, and then time passes from 3 to 4 AM again. In some time zones, if you reset the clock to a time between 3 and 4 AM, the clock set command may not interpret your time correctly. If this occurs, the ARX assumes that the transition to standard time has already occurred.
This only occurs in time zones that are East of the Prime Meridian, with positive offsets from UTC.
Workaround: Avoid the clock set command during the day and hour of transition.
Config replay fails due to DB schema changes. (27866)
Some upgrades result in database-schema changes. If this upgrade includes a database change, do not use previously-saved configuration scripts because these scripts will not implement the changes properly. Use the copy global-config command to copy (and save) the switch's new global configuration.
See the section, Installing the Software, to determine whether or not this release contains a database change.
Workaround: After upgrading to the new release, use the copy global-config command to copy (and save) the switch's global configuration to a local file, a remote server, or an email recipient.
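For example, the global configuration could be saved to a remote server with a command like the following sketch. The FTP URL, credentials, and file name are hypothetical, and the destination-url form is assumed to match the copy command syntax shown elsewhere in these notes; see the CLI Reference Guide for the supported destination types:
bstnA# copy global-config ftp://admin:passwd@10.10.10.5/arx-global-config.txt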
The CLI displays unintended errors if you interrupt the copy CLI command (with <Ctrl-C>) during the file transfer. (32531)
The CLI copy command prints the following messages while it transfers a large file to or from the ARX:
% INFO: Transferred nnn of total megabytes; still copying . . .
If you press <Ctrl-C> while the CLI is printing these messages, some internal processes continue after the overall copy process halts. After 20-30 seconds, the CLI displays the following errors from those sub-processes:
gunzip: stdin: unexpected end of file
acrypt: Error, uncompress failed(256).
When upgrading an ARX 1500 or ARX 2500 to a new software release, volume software may slow down and a metalog latency trap may be raised. (365401)
The trap should clear within minutes of completing the rolling upgrade. If possible, perform a software upgrade only during low-traffic hours. This applies to a stand-alone ARX, an active ARX (in a redundant pair), and a standby ARX.
The collect logs command does not overwrite data to an existing file correctly when placing the output on a remote host via FTP. (372138)
Instead, garbage data is appended to the original data in the existing file.
Load Cfg - conflict resolution doesn't allow remote cluster specific entries. (340060)
Once a configuration is up and running on a switch, cluster-specific commands for the remote cluster in a loaded config file are ignored due to the conflict-resolution rules.
This has a significant effect on failover/failback behavior between ARX clusters.
The published direct share limit of 8192 is unsupportable with current ARX software architecture. (375265)
The ARX license limits for direct shares are too high; configuring an ARX up to the system limit (direct_shares_per_system) is usually possible, but leaves the ARX in an unusable state, with commands such as show namespace taking a long time to execute. F5 has decided against lowering these license limits for now. We recommend limiting the number of direct shares per system to approximately 1024.
An ARX running 5.2.0+ cannot import a CIFS share from an Isilon storage server running a version of OneFS that is older than version 6.5. (341470)
Isilon storage servers were not previously qualified for use behind an ARX volume. However, it was possible to import CIFS shares from Isilon servers before Release 5.2.0. If an Isilon share is already imported before the installation of 5.2.0, it continues to function behind the ARX volume until there is a re-import. On import or re-import of an Isilon share, the import may fail with a CIFS_PRIVCHK_FAIL error. Contact F5 Support to work around this issue.
This failure only occurs in a managed volume with persistent-acls enabled.
This may also be an issue with other unqualified servers whose CIFS services are based on Samba.
UTF-8 Chinese characters are truncated in namespace name. (30941)
If a user enters Chinese characters that exceed the GUI's limit for any input field, the GUI does not issue an error message but simply truncates the input.
The GUI input fields limit input based on characters, not bytes. When entering multi-byte characters, the input may be truncated if the total number of bytes representing the characters exceeds the internal byte limit.
- the platform is an ARX-1000, ARX-2000, ARX-4000, or ARX-6000,
- a client and/or server subnet is carried on VLAN 1, and
- channels are in use.
This issue is still under close investigation, as a large number of installations meet the above criteria without experiencing this issue. The issue has only occurred at a single site at press time.
Workaround: Move all of your client and server subnets to a VLAN other than VLAN 1. Contact F5 Support if you need assistance with this procedure.
A RAID rebuild never completes for a drive if you remove it and replace it during the rebuild operation (354351)
If you remove an ARX-1500 or ARX-2500 hard disk during a RAID rebuild operation, the rebuild may never complete after you replace the disk.
Workaround: Allow a RAID rebuild operation to complete before removing the disk that you are rebuilding.
Workaround: Execute shutdown and no shutdown on the interface to bring it back up.
NSCK reports do not identify "marked" multi-protocol directories where you should run a sync files operation. (23891)
Some multi-protocol (NFS and CIFS) directories are "marked" for special processing. These directories contain files and/or subdirectories with one of these naming issues:
If a directory is marked with one of these naming issues, the volume performs extra processing whenever a client tries to introduce an entry with the other naming issue. Depending on the outcome of the processing, the new client entry could become NFS-only (inaccessible to CIFS clients). Refer to the CLI Maintenance Guide for details.
Clients can resolve these issues by accessing the volume through its VIP and renaming the directory's entries. However, the directory mark persists after all of its child entries have been correctly renamed; use the sync files CLI command to remove the mark.
The issue is that there are no reports that identify a directory as "marked" after its entries have been correctly renamed.
Workaround: Use sync files to clear the directory mark immediately after renaming its entries.
You must separately export a CIFS managed volume if you use it as a "managed volume" behind a CIFS direct (presentation) volume. (21231, 24359)
If a CIFS-managed volume is used as a managed volume in a CIFS-presentation volume, its CIFS front-end service must export the managed volume separately. This is in addition to the export for the presentation (or direct) volume. (The same CIFS service must export both volumes.)
Spurious metadata inconsistency in CIFS presentation volumes. (354538)
If you run a metadata-inconsistency report on a CIFS-only presentation (or direct) volume, all of the attach points appear as inconsistencies in the report. (You can invoke the metadata-inconsistency report with the nsck ... report inconsistencies CLI command or its GUI equivalent.)
Need to work around MD inconsistency on storage remove. (377220)
A remove share operation can fail if a directory is missing on the share being removed and therefore cannot be promoted to the remaining shares. These errors should not prevent the successful completion of the share removal.
Customer outages caused by adding new IP interfaces to NetApp, Isilon, and Dell-FFS. (341951)
The NetApp filer responds to portmap requests from the ARX with an IP address that is different from the one to which the ARX sent the request. The ARX declines the portmap response, and the corresponding NFS service goes offline.
Workaround: Add secondary IP addresses to the external filer definition.
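Conceptually, the failure is a source-address check on the reply. This hypothetical Python sketch (not ARX code; the addresses are illustrative) shows the kind of validation that rejects the response when the filer answers from a different interface:

```python
def accept_reply(sent_to: str, reply_from: str) -> bool:
    """Accept a portmap-style reply only when it arrives from the
    address that was queried."""
    return sent_to == reply_from

# The filer answers from the address the ARX queried: reply accepted.
print(accept_reply("192.0.2.10", "192.0.2.10"))  # True
# The filer answers from a different (newly added) interface:
# reply declined, and the NFS service goes offline.
print(accept_reply("192.0.2.10", "192.0.2.11"))  # False
```

Adding the secondary addresses to the external filer definition makes the replying addresses known to the ARX, which is why the workaround above resolves the issue.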
Solaris clients hang when issuing an ls on any share within a PNS direct attached volume. (25782)
Some Solaris clients can specify a size limit for NFS RPCs. When a filer responds with an RPC larger than this limit, the ARX does not "trim" the RPC, so the Solaris client never receives the RPC response. This happens only on the ARX-1500 and ARX-2500, on releases after 6.0.0.
Spurious errors appear in the syslog after an NSM failover. (25782)
NSM processors have redundant peers, even in an ARX that is not configured for overall redundancy. If an NSM processor fails, its peer processes packets for both. If nsm recovery is enabled, the failed processor comes back online and waits to take over for the running processor. The failed processor may repeatedly put the following message in the syslog:
NAT rule TCP/ip-address:port for remote action ip-address-2:port-2 type 3 not found.
This syslog message is spurious.
A shadow-copy rule runs indefinitely (instead of terminating immediately) when the RON connection to the target share fails. (32110)
A shadow-copy rule should fail as soon as the RON connection to the target filer fails. Instead, it continues indefinitely, waiting for the RON connection to return.
This issue affects a file-placement rule whose schedule has:
- a start time in the 11 o'clock hour, and
- a recurrence of every Saturday.
The time calculation can fail on the night before the fall DST change. This causes the file-placement rule to run with an invalid time stamp and migrate the wrong files.
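The underlying hazard is generic: on the night of the fall DST change, wall-clock times in the repeated hour occur twice, so a schedule computed from local time is ambiguous. The following Python sketch is illustrative only (not ARX code; the time zone and the US fall-back date of November 3, 2013 are assumptions for the example):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
# The same wall-clock time, 1:30 AM, occurs twice on fall-back night:
first = datetime(2013, 11, 3, 1, 30, tzinfo=tz)           # first pass, EDT (UTC-4)
second = datetime(2013, 11, 3, 1, 30, fold=1, tzinfo=tz)  # second pass, EST (UTC-5)

# The identical local time maps to two instants one hour apart, which is
# the kind of ambiguity a local-time schedule calculation can mishandle.
gap = second.astimezone(ZoneInfo("UTC")) - first.astimezone(ZoneInfo("UTC"))
print(first.utcoffset(), second.utcoffset(), gap)
```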
The ARX makes a best effort when shadow-copying streams that have invalid names from an EMC filer to a Windows shadow volume. (366633)
As a result, the shadow-copy operation may be unsuccessful.
Incorrect values in snmpwalk output. (376742)
If you take a managed volume offline with nsck ... destage and then remove its shares, the removed shares remain visible in an SNMP walk of the volume. The counters for these non-existent shares never change. The next time the ARX reboots, the removed shares no longer appear in the SNMP view of the volume.
The ARX cannot send email messages through the out-of-band (OOB) management interface. NTP, DNS, RADIUS, and snapshot-management services (SSH and RSH) are also unsupported through the OOB interface. (24595)
All email notifications from the ARX go out through an in-band (VLAN) management interface, configured with the interface vlan CLI command. At least one in-band-management interface must have a route to the email server for email notifications to function. The same applies to NTP, DNS, and RADIUS services, as well as SSH and RSH for managing filer snapshots.
Workaround: Use the cfg-mode ip route command (without the mgmt flag) to add a static IP route to the email server(s), NTP server(s), DNS server(s), and/or RADIUS servers. All filers and file servers must already have a route to be usable by the ARX at all, so this is less likely to be an issue for SSH and RSH.
Under very rare circumstances, the ARX may block administrative logins after a reboot. (32537)
An ARX in the F5 Development laboratory did not allow administrative logins after a reboot. Logins to the serial Console port always timed out after the administrative password was entered, and logins to the Out-of-Band Management port (typically labeled "MGMT") were rejected with this error message:
ssh_exchange_identification: Connection closed by remote host
F5 Development has been unable to reproduce this problem, despite hundreds of reboots. We note it here until the problem is proven to be unreproducible at any customer site.
Recovery: Power cycle the ARX.
On the ARX-4000, CoreCollector code has been changed and may display old cores that were never collected and reported. (34722)
The new CoreCollector code now correctly reports all cores from the current release and previous releases. It may find cores that were never collected and reported.
If you replay a 5.2.0+ global-config on a system with an earlier release, domain-join fails for CIFS services. (39767)
The run configs global-config-file CLI command "plays" the global-config file on the ARX, so that the ARX takes on the full storage and policy configuration in that file. Every global-config file contains a special CLI command, kerberos-creds, to keep its CIFS services joined to their Windows Domains during this operation. The format of this hidden command changed in 5.2.0, so the command fails if played on an ARX running an older release. As a result, the CIFS services are unjoined from their domains after you use run configs global-config-file.
Workaround: Manually edit the 5.2.0+ global config file so that the kerberos-creds command conforms to earlier releases. Specifically, remove the last three entries from the command. For example, if you find an entry like this in the 5.2.0+ global-config file:
ac1.MEDARCH.ORG MEDARCH.ORG 1Fkua3z04b6mvQrB4K4/+Q== ac1$ HOST/ac1.MEDARCH.ORG 2
You would remove the last three entries, leaving:
ac1.MEDARCH.ORG MEDARCH.ORG 1Fkua3z04b6mvQrB4K4/+Q==
The above command can play back on an ARX running a pre-5.2.0 release.
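The manual edit amounts to keeping only the first three whitespace-separated fields of each kerberos-creds argument line. A hypothetical Python sketch of that transformation, using the example values from the text above:

```python
def downgrade_creds_line(line: str) -> str:
    """Drop the last three whitespace-separated fields so a 5.2.0+
    kerberos-creds argument line conforms to pre-5.2.0 syntax."""
    fields = line.split()
    return " ".join(fields[:3])

new_style = ("ac1.MEDARCH.ORG MEDARCH.ORG 1Fkua3z04b6mvQrB4K4/+Q== "
             "ac1$ HOST/ac1.MEDARCH.ORG 2")
print(downgrade_creds_line(new_style))
# ac1.MEDARCH.ORG MEDARCH.ORG 1Fkua3z04b6mvQrB4K4/+Q==
```

Apply the same trim to every kerberos-creds entry in the global-config file before replaying it on the older release.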
ARX-500, ARX-2000, and ARX-4000 do not send 'nsmResourceThreshold' traps until the NSM resource is >100%. (356176)
The ARX monitors its NSM resources and should send an nsmResourceThreshold SNMP trap whenever an NSM process attempts to exceed 80%, 90%, or 100% of any of these resources. The issue is that the ARX does not send any nsmResourceThreshold trap until a process tries to exceed 100% of an NSM resource. It does not send traps when an NSM resource exceeds 80% or 90% of its capacity.
This does not apply to the ARX-VE, ARX-1500, or ARX-2500 platforms.
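The intended behavior can be sketched as a simple threshold-crossing check between utilization samples. This Python sketch is illustrative only (an assumption about the intent, not ARX source code):

```python
# Thresholds at which an nsmResourceThreshold trap should fire.
THRESHOLDS = (80, 90, 100)

def traps_fired(previous_pct: float, current_pct: float) -> list:
    """Return the thresholds newly crossed between two utilization samples."""
    return [t for t in THRESHOLDS if previous_pct < t <= current_pct]

print(traps_fired(75, 85))   # [80]
print(traps_fired(85, 95))   # [90]
print(traps_fired(95, 101))  # [100]
# On the affected platforms, only crossings of the 100% mark are reported.
print(traps_fired(70, 101))  # [80, 90, 100]
```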
F5 Online Knowledge Base: http://support.f5.com/
F5 Services Support Online: https://websupport.f5.com/
F5 Software Trial Support: https://www.f5.com/trial/secure/support.php/
For additional information, please visit http://www.f5.com.