Install Elastic Cloud on Kubernetes (Elastic Kubernetes Operator) using the CLI
Elastic Cloud on Kubernetes (ECK) is an official open source operator designed especially for the Elastic Stack (ELK). It automates the deployment, provisioning, management, and orchestration of Elasticsearch, Kibana, APM Server, Beats, Enterprise Search, Elastic Agent, and Elastic Maps Server on Kubernetes. ECK provides features such as cluster monitoring, automated upgrades, scheduled backups, and dynamic scalability of local storage.
This section follows the instructions in the Red Hat OpenShift documentation, Installing cluster logging using the CLI.
In this section, you will:
- Create namespaces for the OpenShift Elasticsearch Operator and for the Cluster Logging Operator
- Install the OpenShift Elasticsearch Operator's:
  - Operator Group
  - Subscription
- Install the Cluster Logging Operator's:
  - Operator Group
  - Subscription
  - Instance
- Verify the installation by listing the pods
Definitions
Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters.
Operator Group provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.
Subscription keeps an Operator up to date by tracking changes to its catalog. Creating a Subscription is how you install an Operator from a catalog source with OLM.
See what is available in the OperatorHub marketplace
- Get the list from the marketplace. It returns a long list that includes elasticsearch-operator.
- Inspect the desired Operator. The description recommends installing this Operator in the openshift-operators-redhat namespace to properly support the Cluster Logging and Jaeger use cases.
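The two steps above can be run with oc against the openshift-marketplace catalog namespace (assuming cluster-admin access to an OpenShift cluster with the default catalogs enabled):

```shell
# List every Operator package published to the configured catalogs;
# elasticsearch-operator appears somewhere in the long output
oc get packagemanifests -n openshift-marketplace

# Show the description, available channels, and install modes for the Operator
oc describe packagemanifests elasticsearch-operator -n openshift-marketplace
```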
Steps to deploy Elasticsearch from the CLI
1. Create Namespaces
Create a namespace for the OpenShift Elasticsearch Operator:
cat <<EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
Note: You must specify the openshift-operators-redhat namespace.
Next, create a namespace for the Cluster Logging Operator:
cat <<EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
2. Install the OpenShift Elasticsearch Operator by creating:
- Operator Group object
- Subscription object
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. If the Operator you intend to install uses the AllNamespaces install mode, then the openshift-operators namespace already has an appropriate Operator group in place.
cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
EOF
A Subscription object subscribes the namespace to the OpenShift Elasticsearch Operator.
cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: "elasticsearch-operator"
  namespace: "openshift-operators-redhat"
spec:
  channel: "4.6"
  installPlanApproval: "Automatic"
  source: "redhat-operators"
  sourceNamespace: "openshift-marketplace"
  name: "elasticsearch-operator"
EOF
You must specify the openshift-operators-redhat namespace. Specify 4.6 as the channel.
Verify the Operator installation.
There should be an OpenShift Elasticsearch Operator in each namespace.
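One way to check is to list the ClusterServiceVersions (the exact CSV name and version shown will depend on your catalog):

```shell
# A CSV for elasticsearch-operator should appear in each namespace,
# with PHASE reported as Succeeded
oc get csv --all-namespaces
```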
3. Install the Cluster Logging Operator
Create an OperatorGroup for the Cluster Logging Operator and a Subscription that subscribes the openshift-logging namespace to it:
cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "4.6"
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Verify using:
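For example, check the ClusterServiceVersion in the logging namespace (the version shown will depend on your catalog):

```shell
# The cluster-logging CSV should reach PHASE Succeeded
oc get csv -n openshift-logging
```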
4. Create a Cluster Logging instance
Create an instance object for the Cluster Logging Operator.
cat <<EOF | oc create -f -
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "<storage-class-name>"
        size: 200G
      resources:
        requests:
          memory: "8Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
EOF
This creates the Cluster Logging components, the Elasticsearch custom resource and components, and the Kibana interface.
5. Verify logging
Verify using:
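For example, list the pods in the logging namespace (the pod names below are typical, not guaranteed, and will carry generated suffixes):

```shell
# Expect pods for the cluster-logging-operator, the elasticsearch-cdm-* nodes,
# one fluentd pod per cluster node, and kibana, all in Running state
oc get pods -n openshift-logging
```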
6. Post-installation tasks
If you plan to use Kibana, you must manually create your Kibana index patterns and visualizations to explore and visualize data in Kibana.
If your cluster network provider enforces network isolation, allow network traffic between the projects that contain the OpenShift Logging operators.
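A minimal sketch of such a policy, assuming a network provider that enforces NetworkPolicy; the policy name and the network.openshift.io/policy-group=logging label are illustrative assumptions, not documented values — adapt the selectors to your environment:

```shell
# Label the logging namespace so it can be selected (label is an assumption)
oc label namespace openshift-logging network.openshift.io/policy-group=logging

# Allow ingress from the labeled namespace into openshift-operators-redhat
cat <<EOF | oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-logging
  namespace: openshift-operators-redhat
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: logging
EOF
```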