The BIG-IP system allocates all but 30 gigabytes of the total disk space to the vCMP application volume. The remaining 30 gigabytes, known as the reserve disk space, are left available for other uses, such as installing additional versions of the BIG-IP system in the future. Both the vCMP disk space allocation and the creation of the reserve disk space occur when you initially provision the vCMP feature as part of vCMP host configuration.
If you want the system to reserve more than the standard 30 gigabytes of disk space for non-vCMP uses, you must do this prior to provisioning the vCMP feature. Adjusting the reserved disk space after you have provisioned the vCMP feature can produce unwanted results.
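As a sketch of this adjustment, on many BIG-IP versions the reserved space can be increased by setting the provision.extramb database variable (a value in megabytes) before provisioning vCMP; the value below is illustrative, not a recommendation:

```shell
# Run in tmsh on the future vCMP host, BEFORE provisioning vCMP.
# Reserve an extra 20 GB (20480 MB) of disk space for non-vCMP uses;
# choose a value that fits your own upgrade plans.
tmsh modify sys db provision.extramb value 20480
tmsh save sys config
```

Because the reserve is carved out at provisioning time, changing this value after vCMP is provisioned may not take effect as expected, which is why the guide recommends setting it first.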
As a vCMP host administrator, you have the important task of initially planning the amount of total system CPU and memory that you want the vCMP host to allocate to each guest. This decision is based on the resource needs of the particular BIG-IP modules that guest administrators intend to provision within each guest, as well as the maximum system resource limits for the relevant hardware platform. Thoughtful resource allocation planning prior to creating the guests ensures optimal performance of each guest. Once you have determined the resource allocation requirements for the guests, you are ready to configure the host. For more information on determining the resource needs of each guest, see Flexible Resource Allocation.
Overall, your primary duties are to provision the vCMP feature and to create and manage guests, ensuring that the proper system resources are allocated to those guests.
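The host-side steps above can be sketched in tmsh as follows; the guest name, ISO image name, slot and core counts, addresses, and VLAN names are all placeholders you would replace with values from your own resource plan:

```shell
# Provision the vCMP feature on the host (dedicated is the level
# used for vCMP provisioning).
tmsh modify sys provision vcmp level dedicated

# Create a guest with an illustrative resource allocation.
# All names and addresses below are placeholders.
tmsh create vcmp guest guest1 \
    initial-image BIGIP-example.iso \
    slots 2 cores-per-slot 2 \
    management-ip 192.0.2.10/24 management-gateway 192.0.2.1 \
    vlans add { external internal } \
    state deployed

tmsh save sys config
```

Setting the guest's state to deployed tells the host to allocate the requested cores and start the guest; attribute names and defaults can vary by BIG-IP version, so verify them against the tmsh reference for your release.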
Performing this task allows you to access the vCMP host. The primary reasons to access the host are to create and manage vCMP guests, manage virtual disks, and view or manage host and guest properties. You can also view host and guest statistics.
The primary duties of a vCMP guest administrator are to provision BIG-IP modules within the guest, configure the correct management IP addresses for the slots pertaining to the guest, and configure any self IP addresses that the guest needs for processing application traffic. The guest administrator must also configure all BIG-IP modules, such as creating virtual servers and load balancing pools within BIG-IP Local Traffic Manager (LTM).
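As a minimal sketch of the guest administrator's first steps, assuming LTM is the module to be provisioned and that the VLAN and self IP address shown are placeholders:

```shell
# Run in tmsh from within the guest, not on the host.

# Provision the LTM module (nominal is an illustrative level choice).
tmsh modify sys provision ltm level nominal

# Create a self IP address for application traffic on an existing
# VLAN; the name, address, and VLAN are placeholders.
tmsh create net self selfip_internal \
    address 198.51.100.5/24 vlan internal

tmsh save sys config
```

Module provisioning within the guest draws on the CPU and memory that the host administrator allocated to that guest, which is why the resource planning described earlier matters.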
Optionally, a guest administrator who wants a redundant system configuration can create a device group with the peer guests as members.
For each vCMP guest, the guest administrator needs to create a unique set of management IP addresses that correspond to the slots of the VIPRION cluster. Creating these addresses ensures that if a blade becomes unavailable, the administrator can log in to another blade to access the guest.
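On a VIPRION guest, these per-slot addresses are the guest's cluster member addresses. A hedged sketch, assuming a guest running on slots 1 and 2 with illustrative addresses (the exact tmsh syntax for cluster members can vary by version):

```shell
# Run in tmsh from within the guest. Assign a management address
# to each cluster member (slot) the guest runs on; addresses are
# placeholders.
tmsh modify sys cluster default members modify { 1 { address 192.0.2.11 } }
tmsh modify sys cluster default members modify { 2 { address 192.0.2.12 } }
tmsh save sys config
```

With one address per slot, losing a blade still leaves the administrator a reachable management address on a surviving slot.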
After all guests are in the Deployed state, each individual guest administrator can configure the appropriate BIG-IP modules for processing application traffic. For example, a guest administrator can use BIG-IP Local Traffic Manager (LTM) to create a standard virtual server and a load-balancing pool. Optionally, if guest redundancy is required, a guest administrator can set up device service clustering (DSC).
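The LTM and DSC steps above can be sketched in tmsh as follows; all object names and addresses are illustrative, and the device group example assumes device trust has already been established between the peer guests:

```shell
# Run in tmsh from within a deployed guest.

# A basic LTM configuration: a load-balancing pool with two members,
# and a standard virtual server that uses it.
tmsh create ltm pool app_pool \
    members add { 203.0.113.10:80 203.0.113.11:80 }
tmsh create ltm virtual app_vs \
    destination 203.0.113.100:80 \
    ip-protocol tcp profiles add { http } pool app_pool

# Optional guest redundancy (DSC): group the peer guests into a
# sync-failover device group. Device names are placeholders.
tmsh create cm device-group guest_dg \
    devices add { guest1.example.com guest2.example.com } \
    type sync-failover

tmsh save sys config
```

Note that DSC here operates between guests, not between hosts: each peer guest is configured as a device in the group, just as standalone BIG-IP systems would be.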
After you and all guest administrators have completed the initial configuration tasks, you should have a VIPRION system provisioned for vCMP, with one or more guests ready to process application traffic.
When logged in to the vCMP host, you can see the VLANs and trunks configured on the VIPRION system, as well as all of the guests that you created, along with their virtual disks. When using the BIG-IP Configuration utility, you can also display a graphical view of the number of cores that the host allocated to each guest and on which slots.
You can also view the current load on a specific guest in terms of throughput, as well as CPU, memory, and disk usage.
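From tmsh on the host, this information can be inspected along these lines; command availability and output detail vary by BIG-IP version, so treat the show commands as examples to verify against your release:

```shell
# Run in tmsh on the vCMP host.

# List each guest's configuration, including its resource allocation.
tmsh list vcmp guest

# Display run-time statistics; on many versions these report
# per-guest CPU, memory, and disk usage.
tmsh show vcmp guest
tmsh show vcmp health
```

The BIG-IP Configuration utility presents the same allocation data graphically, including which slots each guest's cores were placed on.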
When logged in to a guest, the guest administrator can see one or more BIG-IP modules provisioned and configured within the guest to process application traffic. If the guest administrator configured device service clustering (DSC), the guest is a member of a device group.