CadActive On-Premise Server Requirements

Modified on Wed, 31 Jan at 11:54 AM

Introduction

CadActive uses Docker to provide a standardized on-premise installation for our customers, and uses Kubernetes to manage and orchestrate the resulting containers.


Supported Operating Systems

  • Ubuntu 16.04* (Kernel version >= 4.15)
  • Ubuntu 18.04
  • Ubuntu 20.04 (Docker version >= 19.03.10)
  • CentOS 7.4*, 7.5*, 7.6*, 7.7*, 7.8*, 7.9, 8.0*, 8.1*, 8.2*, 8.3*, 8.4* (CentOS 8.x requires Containerd)
  • RHEL 7.4*, 7.5*, 7.6*, 7.7*, 7.8*, 7.9, 8.0*, 8.1, 8.2, 8.3, 8.4 (RHEL 8.x requires Containerd)
  • Oracle Linux 7.4*, 7.5*, 7.6*, 7.7*, 7.8*, 7.9, 8.0*, 8.1, 8.2, 8.3, 8.4, 8.5 (OL 8.x requires Containerd)
  • Amazon Linux 2

*: This version is deprecated because it is no longer supported by its vendor. CadActive continues to support it for now, but support will be removed in a future release.
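Before installing, it can be useful to confirm the host kernel meets the version floor noted above (e.g., >= 4.15 on Ubuntu 16.04). The snippet below is a minimal sketch, not the official installer preflight; it uses `sort -V` to do the version comparison.

```shell
#!/bin/sh
# Hypothetical kernel-version check (not the official preflight).
# If the smaller of (required, running) is the required version,
# the running kernel is new enough.
required="4.15"
kernel="$(uname -r | cut -d- -f1)"
if [ "$(printf '%s\n' "$required" "$kernel" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $kernel OK (>= $required)"
else
  echo "kernel $kernel too old (< $required)"
fi
```

The same pattern works for checking a minimum Docker version (e.g., >= 19.03.10 on Ubuntu 20.04) against `docker --version`.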


Minimum System Requirements

  • 4 CPUs or equivalent per machine
  • 8 GB of RAM per machine
  • 40 GB of Disk Space per machine

    • Note: 10 GB of the total 40 GB should be available to /var/lib/rook. For more information, see Rook.
  • TCP ports 2379, 2380, 6443, 6783, 10250, 10251 and 10252 open between cluster nodes
  • UDP ports 6783 and 6784 open between cluster nodes
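The resource minimums above can be sanity-checked with a short script. This is a rough sketch, not the official kURL preflight: it checks available space on the root filesystem only, so adjust the path if /var/lib/rook is on a separate mount.

```shell
#!/bin/sh
# Hypothetical resource preflight sketch; thresholds mirror the list above.
cpus=$(nproc)
# /proc/meminfo reports MemTotal in kB; convert to whole GB.
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
# Available space on / in GB (strip the trailing "G" and whitespace).
disk_gb=$(df -BG --output=avail / | tail -n1 | tr -dc '0-9')

[ "$cpus" -ge 4 ]     && echo "CPU: OK ($cpus)"          || echo "CPU: below minimum ($cpus < 4)"
[ "$mem_gb" -ge 8 ]   && echo "RAM: OK (${mem_gb} GB)"   || echo "RAM: below minimum (${mem_gb} GB < 8)"
[ "$disk_gb" -ge 40 ] && echo "Disk: OK (${disk_gb} GB)" || echo "Disk: below minimum (${disk_gb} GB < 40)"
```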


Networking Requirements

Firewall Openings for Online Installations

The following domains need to be accessible from servers performing online kURL installs. IP addresses for these services can be found in replicatedhq/ips.

  • amazonaws.com: tar.gz packages are downloaded from Amazon S3 during embedded cluster installations. The IP ranges to allowlist for accessing these can be scraped dynamically from the AWS IP Address Ranges documentation.
  • k8s.kurl.sh: Kubernetes cluster installation scripts and artifacts, including Bash scripts and binary executables, are served from kurl.sh. This domain is owned by Replicated, Inc, which is headquartered in Los Angeles, CA.

No outbound internet access is required for air-gapped installations.
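For online installs, a quick reachability probe against the documented hosts can catch firewall or proxy problems before the installer runs. This is an illustrative sketch; the actual S3 endpoints used during installation are region-specific, so treat the host list as a starting point.

```shell
#!/bin/sh
# Hypothetical connectivity check for online installs.
# -f makes curl fail on HTTP errors; -I sends a HEAD request only.
for host in k8s.kurl.sh amazonaws.com; do
  if curl -sSfI --max-time 10 "https://$host" >/dev/null 2>&1; then
    echo "$host: reachable"
  else
    echo "$host: NOT reachable -- check firewall/proxy allowlist"
  fi
done
```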

Ports Available

In addition to the ports listed above that must be open between nodes, the following ports should be available on the host for components to start TCP servers accepting local connections.

  • 2381: etcd health and metrics server
  • 6781: weave network policy controller metrics server
  • 6782: weave metrics server
  • 9100: prometheus node-exporter metrics server
  • 10248: kubelet health server
  • 10249: kube-proxy metrics server
  • 10257: kube-controller-manager health server
  • 10259: kube-scheduler health server
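A host can be checked for conflicts on these local ports before installation. The sketch below assumes `ss` from iproute2 is available (it is on all the supported distributions above); a port that is already bound will need its current occupant moved or stopped.

```shell
#!/bin/sh
# Hypothetical local-port availability check (not the official preflight).
# "sport = :PORT" filters listening sockets to the given source port.
for port in 2381 6781 6782 9100 10248 10249 10257 10259; do
  if ss -Htln "sport = :$port" 2>/dev/null | grep -q .; then
    echo "port $port: already in use"
  else
    echo "port $port: available"
  fi
done
```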

Cloud Disk Performance

The following example cloud VM instance/disk combinations are known to provide sufficient performance for etcd and will pass the write latency preflight.

  • AWS m4.xlarge with 80 GB standard EBS root device
  • Azure D4ds_v4 with 8 GB ultra disk mounted at /var/lib/etcd provisioned with 2400 IOPS and 128 MB/s throughput
  • Google Cloud Platform n1-standard-4 with 50 GB pd-ssd boot disk
  • Google Cloud Platform n1-standard-4 with 500 GB pd-standard boot disk
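To gauge whether a disk not on this list will pass the write latency preflight, a synchronous-write benchmark can be run against the filesystem that will back etcd. The invocation below is a sketch modeled on the fio benchmark recommended in the etcd documentation (roughly 10,000 sequential 2300-byte writes, each followed by fdatasync); it requires fio to be installed, and TEST_DIR is a placeholder.

```shell
#!/bin/sh
# Hypothetical disk-latency smoke test; point TEST_DIR at the disk under test.
TEST_DIR=/var/lib/etcd   # placeholder -- substitute your etcd data directory
fio --name=etcd-latency --directory="$TEST_DIR" \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300
```

etcd's own guidance is that the 99th percentile fdatasync latency on this workload should stay below roughly 10 ms; slower disks risk failing the preflight.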
