The ARX acts as a resource proxy between the current clients and servers on your network. The switch terminates client requests, determines the correct server to process the request, and then originates a new request to the server. Messages in the reverse direction, from servers to clients, also terminate and restart at the ARX. The clients are said to be at the front end of the ARX, and the servers are said to be at the back end. As you plan to add a switch to your network, it is helpful to remember the sharp division between the switch's front-end and back-end processing.
You can configure one or more namespaces
for your front-end clients. Each namespace is a collection of virtual file systems, called volumes
, under a single authentication domain. A volume is a collection of shares (or exports) hosted on the back-end file servers.
shows clients and servers on the same VLAN and subnet before the introduction of the ARX. The router connects the LAN to additional client and server subnets, perhaps on other campuses.
shows clients and servers after cutting in an ARX.
shows clients and servers on separate VLANs and subnets, with a router connecting the two subnets.
As shown in 1.5
, the ARX has a separate connection to the client subnet and the server subnet in a multiple subnet topology. The switch serves as a proxy for CIFS and/or NFS transactions between the clients and servers.
The ARX also has an interface for out-of-band management (typically labeled MGMT
) on its front panel. This interface is designed for installations with discrete management networks. It must have an IP address outside of the server subnet or any of the client subnets.
lists the ports required by the ARX to communicate with its authentication services, back-end storage, and front-end client services. If you plan to operate in a secure environment, these ports must be open on your firewalls for the ARX to function properly.
To prepare multiple file server shares for inclusion in the same namespace volume, you should avoid name collisions
. A name collision occurs when two shares contain a file with the same path and name. The collision is resolved by renaming the second file (and all subsequent files with that path and name) before they are imported.
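As a sketch of how you might check for collisions in advance, the following shell function lists every relative path that appears in both of two directory trees; it assumes the two shares are mounted locally at paths you supply (the mount points in the usage example are illustrative):

```shell
# find_collisions DIR1 DIR2
# Print the relative path of every file that exists in both directory
# trees; each such path would be a name collision if both shares were
# imported into the same namespace volume.
find_collisions() {
    ( cd "$1" && find . -type f | sort ) > /tmp/.share1.list
    ( cd "$2" && find . -type f | sort ) > /tmp/.share2.list
    # comm -12 prints only the lines common to both sorted lists
    comm -12 /tmp/.share1.list /tmp/.share2.list
}
```

For example, `find_collisions /mnt/share1 /mnt/share2` prints one line per colliding path and prints nothing when the shares do not overlap.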
When a namespace imports an NFS export/share, the ARX takes inventory by reading the share's directory tree as root. The shares must not squash root access for the ARX device's proxy IPs, or this tree walk (and therefore the import) may fail. Set your NFS shares to no-root-squash for all of your proxy IPs.
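On a Linux-based filer, for example, the corresponding /etc/exports entry uses the no_root_squash option; the export path and proxy-IP subnet below are illustrative, and you should check your filer's documentation for its exact syntax:

```
# /etc/exports on the back-end filer -- illustrative path and subnet.
# no_root_squash lets the ARX proxy IPs retain root access for the tree walk.
/vol/home  192.168.74.0/24(rw,no_root_squash)
```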
For a list of all proxy IP addresses on the ARX, issue a show ip proxy-addresses
command. Note the addresses that are in use. See the following example.
Do not allow access to these shares from actual clients; changes from other clients would cause confusion for the namespace software. The only exception to this rule is a management client, which may require access for backups or troubleshooting.
To import a share, navigate to NFS > Manage Exports (in the left-hand navigation panel) and double-click the share that you want the ARX to import. 1.7 shows the NetApp Manage Exports screen.
On the EMC Celerra server, select NFS Exports
. See the left-hand navigation column in the following figure.
or click an existing export name, as appropriate.
As root, edit the /etc/exports file to accomplish all of these goals. To allow mounts below the root of the share, you must use the -alldirs
flag. For security reasons, BSD only allows this flag for shares that map to block devices. On the BSD machine, use the df
command for a list of block devices.
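A sketch of a BSD /etc/exports line for such a share follows; the path, network, and option values are illustrative, so consult exports(5) on your BSD machine for the exact syntax:

```
# /etc/exports on the BSD filer -- illustrative values.
# -alldirs permits mounts below the share root; -maproot=root preserves
# root access for the ARX proxy IPs on the listed network.
/vol/share1 -alldirs -maproot=root -network 192.168.74.0 -mask 255.255.255.0
```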
Autonomous ARX operations, such as migrating files between back-end CIFS shares, require a Windows user identity so that the ARX can access servers the same way a Windows client does. This identity, called a proxy user, is a valid user account in the file servers' Windows domain. The proxy user requires strong privileges on all CIFS-supporting servers: the account must belong to the Backup Operators group (or a group with equivalent privileges), and it must have full control (defined as both read and change control) over all files and directories in the share.
If your installation supports Per-Seat
licensing, this is not an issue. For Per-Server
licensing, you must configure each back-end server with 32 licenses per namespace.
You can also configure a volume to copy its files to a remote ARX volume, called a shadow volume
. In this case, you must copy all local groups behind the source volume to all servers behind the shadow volume. This facilitates client access to the local copy as well as the remote one. If the shadow volume's site is in a different Windows domain altogether, you must duplicate all of the Windows user groups from the source volume's file servers on all of the shadow volume's servers.
The EMC Data Domain system has a particular CLI command designed to support the ARX proxy user: cifs option set F5
. This command accepts the domain and username of any valid Windows account, defined externally on your Windows Domain Controllers (DCs):
is the Windows domain for the proxy-user name, as defined on your DCs.
is the name of an existing Windows user, also as defined on your DCs.
Then, run the filesys restart
command. This gives the Windows user Backup Operator privileges on the Data Domain system. For example, the following command sequence provides Backup Operator privileges to jqpublic on the Data Domain system named med-dd510:
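Based on the syntax described above, that sequence might look like the following sketch; the domain name MEDDOMAIN is hypothetical, and the exact argument form of cifs option set may differ on your Data Domain release:

```
# On the med-dd510 CLI -- the domain name is hypothetical:
cifs option set F5 MEDDOMAIN\jqpublic
filesys restart
```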
By default, the EMC Data Domain system allows read-only access to a small number of CIFS clients. However, an ARX volume facilitates read/write access to the Data Domain share by any of its CIFS clients; even if CIFS clients access this share infrequently, there is typically a wide variety of them. When you use the cifs share create
command to create a CIFS share, use the following options to prepare it for an ARX volume:
cifs share create share-name path path clients * browsing enabled writeable enabled
share-name is the name of the Data Domain share, as seen by the ARX volume.
path is the directory path, which is always a subdirectory of /backup.
clients * allows all CIFS clients to access the share.
browsing enabled allows the ARX software to perform some necessary management functions.
writeable enabled allows CIFS clients and the ARX software to write to the share.
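For example, a sketch of such a command for a hypothetical share named arxdata (the share name and path are illustrative):

```
cifs share create arxdata path /backup/arxdata clients * browsing enabled writeable enabled
```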
A namespace can contain up to three types of volumes: managed volumes, direct volumes, and/or shadow volumes. A managed volume
keeps metadata about all of its files and directories. Metadata is information about the location of the files on back-end filers, among other things. The managed volume uses the metadata to manage these file locations through namespace policy. A direct volume
does not have any metadata and therefore does not support any policy. A shadow volume
is a copy of a managed volume, possibly located on another ARX.
The ARX namespace metadata records the physical location of the files that the managed volumes are managing. The ARX uses this metadata to find files and directories on the physical file systems. Each managed volume in a namespace maintains its own metadata for the file systems it manages, and each volume can store its metadata in a unique location.
This filer share is said to hold the volume root
. Choose the root share for each of your volumes in advance. Configure the root share first as you configure your namespace volumes.
If one share contains another (a fairly common scenario in a CIFS environment), only import one of the shares. Overlapping shares, imported into one or more namespaces, invariably cause namespace corruption. For example, suppose you have a C:\home share that contains the shared subdirectories C:\home\jrandom and C:\home\juser. You can import C:\home into a namespace volume, or you can import one or both of the subdirectories. Do not import both C:\home and any of its subdirectories.
For a CIFS subshare with a different Access-Control List (ACL) than its parent share, you can use a special CIFS subshare
feature on the ARX. You use this feature to identify any subshares after the parent share is imported, and share them out to your CIFS clients.
Consult your filer documentation (from all vendors) concerning client access and the recommended security configurations. Pay particular attention to non-native access
to the filer. Non-native access means accessing a UNIX file through CIFS, or an NTFS (Windows) file through NFS. Of particular interest are the following questions:
As mentioned earlier, the configured proxy user must have full read/write privileges from both NFS and CIFS. The NetApp NT/UNIX user map must equate the proxy-user credentials on the NT side with root
on the UNIX side. The user map is in /etc/usermap.cfg, which you can access from an NFS client by mounting /vol/vol0. For example, this command sequence mounts the NetApp filer at 192.168.25.21 and lists the usermap.cfg file:
is the Windows domain for the proxy user (use the short version; for example, MYDOMAIN instead of MYDOMAIN.MYCOMPANY),
is the Windows username, and
EMC Celerra servers require a new, unused account for the proxy user, immediately mapped to root on the UNIX side. If a client has already authenticated with a particular username and password, it would be prohibitively difficult to re-map that username to root on an EMC. EMC Release 188.8.131.52 introduces a command to resolve this problem; these instructions apply to prior releases.
identifies the data mover behind the ARX.
is the name for the proxy-user account that you created from Windows. The two zeros in the third and fourth fields are the required UID and GID for root
. The values for the remaining fields are outside the scope of this document; you can use man 5 passwd
from the EMC CLI to access the EMC documentation.
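A sketch of such an entry follows; the account name, comment, home directory, and shell are illustrative, and the 0:0 in the third and fourth fields is the required root UID and GID:

```
# passwd-format entry on the Celerra data mover -- illustrative values:
arxproxy:*:0:0:ARX proxy user:/home/arxproxy:/bin/false
```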
shows how the ARX acts as a resource proxy between clients and servers. In its role as a proxy, it must carry the identity of a client user through to the back-end servers. This allows already-established Access Control Lists (ACLs) to continue their role in controlling access to files. It also makes the ARX transparent to users in an AD domain. The ARX authenticates a client once, using Kerberos, and then uses the client's credentials to access any server that contains a requested file.
Special administrative privileges are required to join an F5 front-end CIFS server (F5 server
) to an AD domain. The domain-join operation has two major steps: add the F5 server to the AD domain and raise the Trusted for Delegation flag for the server. Each of these steps requires a distinct administrative privilege:
describes the basic features of each ARX. For more detailed hardware requirements, refer to System Specifications and Requirements
or refer to the ARX Hardware Reference Guide.
shows an overview of the features for each ARX.
Once you reach the installation's initial interview
, you can access the ARX through the serial console to configure a default administrator, a switch identifier, and out-of-band (OOB) management. Again, consult the hardware installation guides for each ARX model for the tasks involved.
Next, add CIFS and/or NFS services using the Quick Start: CIFS Storage Use Case
or the Quick Start: NFS Storage Use Case.
Use these documents to quickly aggregate your file server storage with the ARX Manager. Using the instructions in these quick starts, you can create a namespace and one managed volume, connect the volume to multiple file server shares, and then offer the volume (which aggregates your share storage) to clients as a single, virtual-service share.
Next, you can configure your storage environment using the ARX CLI Storage-Management Guide
. Consult the guide for the tasks involved in configuring the storage environment.