11.2. Creating a DRBD resource suitable for GFS2

Since GFS is a shared cluster file system that expects concurrent read/write access to its storage from all cluster nodes, any DRBD resource used to store a GFS file system must be configured in dual-primary mode. It is also recommended to enable DRBD's policies for automatic recovery from split brain. Promoting the resource on both nodes and starting the GFS file system is handled by Pacemaker. To prepare your DRBD resource, include the following lines in its configuration:

resource <resource> {
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    ...
  }
  ...
}
[Warning]Warning

By configuring auto-recovery policies, you are effectively configuring automatic data loss! Be sure you understand the implications.

Once you have added these options to your freshly configured resource, you may initialize the resource as you normally would. Since the allow-two-primaries option is enabled for this resource, you will be able to promote it to the primary role on both nodes.
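For illustration, the following is a minimal sketch of bringing up such a resource and promoting it on both nodes, assuming DRBD 8.4-style drbdadm commands and a placeholder resource name r0; adjust the resource name and the initial synchronization step to your environment.

# On both nodes: create the metadata and bring the resource up
drbdadm create-md r0
drbdadm up r0

# On one node only: force the initial synchronization
drbdadm primary --force r0

# After the initial sync has finished, promote on both nodes
# (in a production cluster, promotion is normally left to Pacemaker)
drbdadm primary r0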

[Important]Important

Again: be sure to configure fencing/STONITH and to test the setup extensively to cover all possible use cases, especially in dual-primary setups, before going into production.

11.2.1. Enabling resource fencing for a dual-primary resource

To enable resource fencing in DRBD, add the following sections to your resource configuration:

  disk {
        fencing resource-and-stonith;
  }

  handlers {
        fence-peer              "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target     "/usr/lib/drbd/crm-unfence-peer.sh";
  }

The referenced scripts should come with your DRBD installation.
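Putting the pieces together, a complete resource definition combining the dual-primary options and the fencing settings might look like the sketch below. The resource name, host names, device, backing disk, and addresses are placeholders, not values prescribed by this guide.

resource r0 {
  disk {
    fencing resource-and-stonith;
  }
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  handlers {
    fence-peer              "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target     "/usr/lib/drbd/crm-unfence-peer.sh";
  }
  on alice {
    device    /dev/drbd0;
    disk      /dev/sda1;
    address   10.1.1.31:7789;
    meta-disk internal;
  }
  on bob {
    device    /dev/drbd0;
    disk      /dev/sda1;
    address   10.1.1.32:7789;
    meta-disk internal;
  }
}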

[Warning]Warning

Don’t be misled by the brevity of section 9.1.1, Fencing, in the DRBD User's Guide: all dual-primary setups require fencing in your cluster. See chapters 5.5, Configuring Fence Devices, and 5.6, Configuring Fencing for Cluster Members, in the Red Hat Cluster documentation for more details.