Red Hat OpenShift
EDB Postgres Distributed for Kubernetes is a certified operator that can be installed on OpenShift using a web interface.
Ensuring access to EDB private registry
Important
You need access to the private EDB repository where both the operator and operand images are stored. Access requires a valid EDB subscription plan. See Accessing EDB private image registries for details.
The OpenShift install uses pull secrets to access the operand and operator images, which are held in a private repository.
Once you have credentials to the private repository, you need to create two pull secrets in the `openshift-operators` namespace:

- `pgd-operator-pull-secret` for the EDB Postgres Distributed for Kubernetes operator images
- `postgresql-operator-pull-secret` for the EDB Postgres for Kubernetes operator images
You can create each secret using the `oc create` command:
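For example, a minimal sketch of the two commands; the registry host `docker.enterprisedb.com` is an assumption here, so adjust it to match the repository you're entitled to:

```sh
# Sketch: replace @@REPOSITORY@@ and @@TOKEN@@ with your own values;
# the --docker-server value is an assumption and may differ for your plan.
oc create secret docker-registry pgd-operator-pull-secret \
  -n openshift-operators \
  --docker-server=docker.enterprisedb.com \
  --docker-username="@@REPOSITORY@@" \
  --docker-password="@@TOKEN@@"

oc create secret docker-registry postgresql-operator-pull-secret \
  -n openshift-operators \
  --docker-server=docker.enterprisedb.com \
  --docker-username="@@REPOSITORY@@" \
  --docker-password="@@TOKEN@@"
```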
Where:

- `@@REPOSITORY@@` is the name of the repository, as explained in Which repository to choose?.
- `@@TOKEN@@` is the repository token for your EDB account, as explained in How to retrieve the token.
Installing the operator
The EDB Postgres Distributed for Kubernetes operator can be found in the Red Hat OperatorHub directly from your OpenShift dashboard.
- From the hamburger menu, select Operators > OperatorHub.
- In the web console, use the search box to filter the listing. For example, enter `EDB` or `pgd`.
- Read the information about the operator and select Install.
In the Operator Installation page, select:
- The installation mode. Cluster-wide is currently the only mode.
- The update channel (currently preview).
- The approval strategy to apply when a new release of the operator, certified by Red Hat, becomes available on the marketplace:
- Automatic: OLM upgrades the running operator with the new version.
- Manual: OpenShift waits for human intervention by requiring an approval in the Installed Operators section.
Cluster-wide installation
With cluster-wide installation, you're asking OpenShift to install the operator in the default `openshift-operators` namespace and to make it available to all the projects in the cluster. This is the default and normally recommended approach for installing EDB Postgres Distributed for Kubernetes.
From the web console, for Installation mode, select All namespaces on the cluster (default).
On installation, the operator is visible in all namespaces. If there were problems during installation, check the logs in any pods in the `openshift-operators` project on the Workloads > Pods page, as you would with any other OpenShift operator.
Beware
By choosing the cluster-wide installation, you can't easily move to a single-project installation later.
Creating a PGD cluster
After the installation by OpenShift, the operator deployment is in the `openshift-operators` namespace. Notice that the cert-manager operator was also installed, as was the EDB Postgres for Kubernetes operator (`postgresql-operator-controller-manager`).
After checking that the `pgd-operator-controller-manager` deployment is READY, you can start creating PGD clusters. The EDB Postgres Distributed for Kubernetes repository contains some useful sample files.
You must deploy your PGD clusters in a dedicated namespace or project; the default namespace is reserved.
First, create a new namespace and deploy a self-signed certificate `Issuer` in it:
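A minimal sketch of those two steps, assuming the new namespace is called `pgd-group` (the name is illustrative) and that you're using the sample `issuer-selfsigned.yaml` manifest from the repository:

```sh
# The namespace name pgd-group is illustrative; pick your own.
oc create namespace pgd-group

# Deploy a self-signed certificate Issuer in the new namespace.
oc apply -n pgd-group -f issuer-selfsigned.yaml
```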
Using PGD in a single OpenShift cluster in a single region
Now you can deploy a PGD cluster, for example a flexible 3-region configuration, which contains two data groups and a witness group. You can find the YAML manifest in the file `flexible_3regions.yaml`.
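For example, assuming the manifest is available locally and the dedicated namespace created above:

```sh
oc apply -n pgd-group -f flexible_3regions.yaml
```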
Your PGD groups start to come up:
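You can follow their progress by listing the PGDGroup resources; a sketch, assuming the operator exposes them under the `pgdgroups` resource name:

```sh
oc get pgdgroups -n pgd-group
```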
Using PGD in multiple OpenShift clusters in multiple regions
To deploy PGD in multiple OpenShift clusters in multiple regions, you must first establish a way for the PGD groups to communicate with each other. The recommended way of achieving this with multiple OpenShift clusters is to use Submariner. Configuring the connectivity is outside the scope of this documentation. However, once you've established connectivity between the OpenShift clusters, you can deploy PGD groups synced with one another.
Warning
This example assumes you're deploying three PGD groups, one in each OpenShift cluster, and that you established connectivity between the OpenShift clusters using Submariner.
Similar to the single-cluster example, this example creates two data PGD groups and one witness group. In contrast to that example, each group lives in a different OpenShift cluster.
In addition to basic connectivity between the OpenShift clusters, you need to ensure that each OpenShift cluster contains a certificate authority that's trusted by the other OpenShift clusters. This condition is required for the PGD groups to communicate with each other.
The OpenShift clusters can all use the same certificate authority, or each cluster can have its own certificate authority. Either way, you need to ensure that each OpenShift cluster's certificates trust the other OpenShift clusters' certificate authorities.
This example uses a self-signed certificate that has a single certificate authority used for all certificates on all the OpenShift clusters.
The example refers to the OpenShift clusters as OpenShift Cluster A, OpenShift Cluster B, and OpenShift Cluster C. In OpenShift, an installation of the EDB Postgres Distributed for Kubernetes operator from OperatorHub includes an installation of the cert-manager operator. We recommend creating and managing certificates with cert-manager.
- In OpenShift Cluster A, create a namespace and, in it, create the objects needed for a self-signed certificate. Assuming that the PGD operator and cert-manager are installed, create a self-signed certificate `Issuer` in that namespace, as sketched below:
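A sketch of this step, assuming your current context points at OpenShift Cluster A and the namespace is called `pgd-group` (the name used in the next step):

```sh
# On OpenShift Cluster A: create the namespace and the self-signed Issuer.
oc create namespace pgd-group
oc apply -n pgd-group -f issuer-selfsigned.yaml
```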
- After a few moments, cert-manager creates the issuers and certificates. There are also now two secrets in the `pgd-group` namespace: `server-ca-key-pair` and `client-ca-key-pair`. These secrets contain the certificates and private keys for the server and client certificate authorities. You need to copy these secrets to the other OpenShift clusters before applying the `issuer-selfsigned.yaml` manifest. You can use the `oc get secret` command to get the contents of the secrets:
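For example, a sketch of exporting both CA secrets to local files so they can be recreated on the other clusters:

```sh
oc get secret server-ca-key-pair -n pgd-group -o yaml > server-ca-key-pair.yaml
oc get secret client-ca-key-pair -n pgd-group -o yaml > client-ca-key-pair.yaml
```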
- After removing the content specific to OpenShift Cluster A from these secrets (such as uid, resourceVersion, and timestamp), you can switch context to OpenShift Cluster B. Then create the namespace, create the secrets in it, and only then apply the `issuer-selfsigned.yaml` file:
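A sketch of this step; the context name `cluster-b` is illustrative and depends on your kubeconfig:

```sh
# Switch to OpenShift Cluster B (context name is illustrative).
oc config use-context cluster-b

oc create namespace pgd-group

# Recreate the CA secrets exported from Cluster A, after stripping
# cluster-specific metadata such as uid and resourceVersion.
oc apply -n pgd-group -f server-ca-key-pair.yaml
oc apply -n pgd-group -f client-ca-key-pair.yaml

# Only then apply the issuer manifest.
oc apply -n pgd-group -f issuer-selfsigned.yaml
```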
- You can switch context to OpenShift Cluster C and repeat the same process followed for Cluster B: