Carbonite Availability


Introduction

PhoenixNAP has partnered with Carbonite to leverage the Carbonite Availability software and offer seamless failover from your physical environment into PhoenixNAP virtual or physical destinations. Our DRaaS Carbonite Availability offering fits businesses of any size and supports practically all storage, platform, and hardware options.

The DRaaS Carbonite Availability solution provides continuous protection and instant recovery for individual files, folders, or entire servers, including system settings. The underlying technology captures changes at the byte level and sends them from the source to the target in real time, with virtually no impact on the performance of your servers.

Installing the Client: considerations

There are a few things to keep in mind to optimize any Carbonite Availability job type. These include adjusting the anti-virus settings and the handling of temporary data.

Anti-virus

Please exclude the Carbonite Availability queue directory on both the source and the target from any real-time scanning or scheduled system scans. If a queue file is deleted by a process other than Carbonite Availability, unexpected results may occur, including an auto-disconnect due to the loss of queued data. Excluding the queue directories is safe because:
  • The files in the source queue directory have already been scanned (cleaned, deleted, or quarantined) in their original storage location.
  • The files in the target queue have already been scanned (cleaned, deleted, or quarantined) on the source.

For more information on the Carbonite Availability queue, refer to the Carbonite Availability queue documentation.

Temporary Files

Some applications create temporary files that are used to store information that may not be necessary to replicate. If user profiles and home directories are stored on a server and replicated, some unexpected data may be replicated if applications use the \Local Settings\Temp directory to store data.

This could result in a significant amount of unnecessary data replication on large file servers. Additionally, the \Local Settings\Temporary Internet Files or \AppData\Local\Microsoft\Windows\Temporary Internet Files directories can easily reach a few thousand files and many megabytes in size. Multiplied across a hundred users, this can quickly add up to several gigabytes of data that do not need to be replicated.

You may want to consider excluding temporary data like this; however, it is important to understand how applications use these temporary files.

For example, Microsoft Word creates a temporary file when a document is opened. When the user closes the file, the temporary file is renamed to the original file and the original file is deleted. In this case, you must replicate that temporary file so that Carbonite Availability can process the rename and delete operations appropriately on the target.
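Before excluding these locations, it can help to measure how much temporary data actually accumulates per profile. The sketch below walks a profiles directory and totals the temp subdirectories; the C:\Users root and the subdirectory names are assumptions drawn from the paths above, so adjust them to your environment:

```python
import os

def dir_size_bytes(path):
    """Total size of all files under path (0 if the path does not exist)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or is inaccessible; skip it
    return total

def temp_data_per_profile(profiles_root, temp_subdirs):
    """Map each user profile to the bytes held in its temp subdirectories."""
    sizes = {}
    if not os.path.isdir(profiles_root):
        return sizes
    for user in os.listdir(profiles_root):
        user_dir = os.path.join(profiles_root, user)
        if os.path.isdir(user_dir):
            sizes[user] = sum(
                dir_size_bytes(os.path.join(user_dir, sub))
                for sub in temp_subdirs
            )
    return sizes

if __name__ == "__main__":
    # Hypothetical layout mirroring the directories mentioned above.
    subdirs = [
        os.path.join("AppData", "Local", "Temp"),
        os.path.join("AppData", "Local", "Microsoft", "Windows",
                     "Temporary Internet Files"),
    ]
    for user, size in temp_data_per_profile(r"C:\Users", subdirs).items():
        print(f"{user}: {size / 1024 / 1024:.1f} MB of temp data")
```

A sizing pass like this can tell you whether an exclusion is worth the risk for a given file server.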

Push Install

To perform a push install of the client through the console, make sure you have the necessary permissions and the username/password to access the server. You can follow the basic instructions below:

  1. Select a server in the Carbonite Availability console and click Install.

    (Screenshot: Select Server)
  2. If this is a source system, set the license. Enter the license that corresponds to the purchased product. If this is a target server, do not set a license.

    (Screenshot: Set License)
  3. Set the default installation options. If this is a Windows system, you can leave the defaults. If Linux is selected, you need to specify the parent folder of the installation file.
    Example Default:

    (Screenshot: Default)
    Example RPM64:

    (Screenshot: RPM64)
  4. Set the options for the default Windows or Linux installation.
    If you wish to set a different target installation folder or change the queue settings, you can set those here.
  5. Uncheck Reboot automatically if needed, especially if this is the source server.

    (Screenshot: Reboot)



Manual Install

For detailed instructions for the Windows version, refer to this document.

The manual install wizard is the same as the console install, except that you need to select Server Components Only as the type of installation.

When you are prompted about the disk queue setup, configure it according to your needs.


The instructions for the Linux version are located here.

 Do not run the Linux installer as sudo.


Failing Over: full server jobs

To perform a job failover:

  1. Navigate to the Jobs page.
  2. Highlight the job that you want to fail over.
  3. Click Failover, Cutover, or Recover in the toolbar.
  4. Select the type of failover to perform.
    • Failover to live data. Select this option to initiate a full, live failover using the current data on the target. The source is automatically shut down if it is still running. The target then stands in for the source by rebooting and applying the source identity, including its system state, on the target.
      After the reboot, the target assumes the role of the source, and the target identity no longer exists.
    • Perform test failover. Select this option to perform a test failover using the current data on the target. This option is like live failover, except the source is not shut down. Therefore, you should isolate the target from the network before beginning the test. Use the appropriate procedure depending on whether your target is a virtual or physical server. Note that, on a physical server, you will have to reload the operating system on the target after the test.
      • Virtual. If your target is a virtual server, use the following procedure:
        1. Stop the job.
        2. Take a snapshot of the target virtual server using your hypervisor console.
        3. Attach the target virtual server to a null virtual switch or one that does not have access to your network infrastructure.
        4. Perform the test failover and complete any testing on the virtual server.
        5. After your testing is complete, revert to the snapshot of the target virtual server from before the test started.
        6. Reconnect the target virtual server to the proper virtual switch.
        7. Restart the job.
      • Physical. If your target is a physical server, use the following procedure:
        1. Stop the job.
        2. Create a system image of the target using any desired tool.
        3. Perform the test failover and complete any testing on the physical server.
        4. After your testing is complete, use the system image to reload the system volume.
        5. Restart the job.
  5. Select how you want to handle the data in the target queue.
    • Apply data in target queues before failover or cutover. All of the data in the target queue will be applied before failover begins. The advantage to this option is that all of the data that the target has received will be applied before failover begins. The disadvantage is that, depending on the amount of data in the queue, applying all of the data could take a long time.
    • Discard data in the target queues and failover or cutover immediately. All of the data in the target queue will be discarded and failover will begin immediately. The advantage to this option is that failover will occur immediately. The disadvantage is that any data in the target queue will be lost.
    • Revert to last good snapshot if target data state is bad. If the target data is in a bad state, Carbonite Availability will automatically revert to the last good Carbonite Availability snapshot before failover begins. If the target data is in a good state, Carbonite Availability will not revert the target data. Instead, Carbonite Availability will apply the data in the target queue and then failover. The advantage to this option is that good data on the target is guaranteed to be used. The disadvantage is that if the target data state is bad, you will lose any data between the last good snapshot and the failure.
  6. When you are ready to begin the failover, click Failover.
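When deciding between applying and discarding queued data, a rough estimate of how long the queue would take to drain can inform the choice. A back-of-the-envelope sketch (the queue size and apply rate below are hypothetical; read the real values from your job statistics):

```python
def estimated_apply_time_seconds(queue_bytes, apply_rate_bytes_per_sec):
    """Rough time to drain the target queue before failover can begin."""
    if apply_rate_bytes_per_sec <= 0:
        raise ValueError("apply rate must be positive")
    return queue_bytes / apply_rate_bytes_per_sec

# Hypothetical numbers: 8 GB queued, target applies ~40 MB/s.
queue_bytes = 8 * 1024**3
rate = 40 * 1024**2
seconds = estimated_apply_time_seconds(queue_bytes, rate)
print(f"Draining the queue would delay failover by ~{seconds / 60:.1f} minutes")
```

If the estimated delay exceeds your recovery time objective, discarding the queue (and accepting the data loss) or reverting to a snapshot may be the better trade-off.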

 Note

If you have paused your target, failover will not start if configured for automatic failover, and it cannot be initiated if configured for manual intervention. You must resume the target before failover automatically starts or before you can manually start it.

Failing Over: files and folders jobs

 Note

If you are using a files and folders job in a standalone to cluster configuration, the Failover, Cutover, or Recover button will be enabled once a mirror is complete. Do not fail over, as unexpected results may occur. Failover is not supported for files and folders jobs in a standalone to cluster configuration.
  1. Navigate to the Jobs page.
  2. Highlight the job that you want to fail over.
  3. Select the type of failover to perform.
    • Failover to live data. Select this option to initiate a full, live failover using the current data on the target. The target will stand in for the source by assuming the network identity of the failed source. User and application requests destined for the source server or its IP addresses are routed to the target.
    • Perform test failover. This option is not available for files and folders jobs.
  4. Select how you want to handle the data in the target queue.
    • Apply data in target queues before failover or cutover. All of the data in the target queue will be applied before failover begins. The advantage to this option is that all of the data that the target has received will be applied before failover begins. The disadvantage is that, depending on the amount of data in the queue, applying all of the data could take a long time.
    • Discard data in the target queues and failover or cutover immediately. All of the data in the target queue will be discarded and failover will begin immediately. The advantage to this option is that failover will occur immediately. The disadvantage is that any data in the target queue will be lost.
  5. When you are ready to begin the failover, click Failover.

 Note

If your NICs were configured for network load balancing (NLB), you will have to reconfigure that after failover.

Network Considerations

Firewalls/Ports

If your source and target are on opposite sides of a firewall, you will need to configure your hardware to accommodate communications. You must have the hardware already in place and know how to configure the hardware ports. If you do not, see the reference manual for your hardware.

Carbonite Availability ports. Ports 6320, 6325, and 6326 are used for Carbonite Availability communications and must be open on your firewall. Open UDP and TCP for both inbound and outbound traffic.
By default, Carbonite Availability uses ICMP pings to monitor the source for failover. You should configure your hardware to allow ICMP pings between the source and target. If you cannot, you will have to configure Carbonite Availability to monitor for a failure using the Carbonite Availability service. See the "Failover Monitoring" section of Creating a full server job or Creating a files and folders job in the Carbonite Availability documentation.

Microsoft WMI and RPC ports. Some features of Carbonite Availability and its console use WMI (Windows Management Instrumentation), which relies on RPC (Remote Procedure Call). By default, RPC uses random ports above 1024, and these ports must be open on your firewall. RPC ports can be restricted to a specific range through registry changes and a reboot. See Microsoft Knowledge Base article 154596 for instructions.
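As a rough sketch of the registry change described in that article (the 5000-5100 range here is an illustrative assumption; choose a range that matches your firewall rules, and verify against the KB article before applying), the relevant values live under a single key:

```
Key: HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet

Ports                   (REG_MULTI_SZ) = 5000-5100
PortsInternetAvailable  (REG_SZ)       = Y
UseInternetPorts        (REG_SZ)       = Y
```

After setting these values and rebooting, open the same port range on your firewall.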

Microsoft File Share and Directory ports. Carbonite Availability push installations will also rely on File Share and Directory ports, which must be open on your firewall. Check your Microsoft documentation if you need to modify these ports.

Microsoft File Share uses ports 135 through 139 for TCP and UDP communications.
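Once the firewall rules are in place, a quick connectivity check from one side toward the other can confirm the Carbonite Availability ports are reachable. A minimal sketch (this checks TCP reachability only, not UDP or ICMP, and the hostname in the usage example is a placeholder):

```python
import socket

# Ports used for Carbonite Availability communications (see above).
CARBONITE_PORTS = (6320, 6325, 6326)

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_carbonite_ports(host):
    """Map each Carbonite Availability port to its TCP reachability from here."""
    return {port: tcp_port_open(host, port) for port in CARBONITE_PORTS}

# Example usage (hypothetical hostname):
#   for port, ok in check_carbonite_ports("target.example.com").items():
#       print(f"TCP {port}: {'open' if ok else 'blocked or closed'}")
```

A port showing as closed here points to a firewall rule or service problem to investigate before starting a job.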

IP and port forwarding

Carbonite Availability supports IP and port forwarding in NAT environments with the following caveats.

  • Only IPv4 is supported.
  • Only standalone servers are supported. Clusters are not supported with NAT environments.
  • DNS failover and updates will depend on your configuration.
    • Only the source or target can be behind a router, not both.
    • The DNS server must be routable from the target.

For more information on NAT and IP and port forwarding, see the IP and port forwarding documentation.