Commit 75ff44f

Add markdownlint to ensure consistent formatting (#14)
Co-authored-by: Kasper Møller <kasper.moeller@eficode.com>
1 parent 1ee813b commit 75ff44f

File tree

15 files changed: +107 −56 lines

.editorconfig

+14
@@ -0,0 +1,14 @@
+# EditorConfig is awesome: https://EditorConfig.org
+
+# top-most EditorConfig file
+root = true
+
+# Unix-style newlines with a newline ending every file
+[*]
+end_of_line = lf
+indent_style = space
+
+[*.md]
+indent_size = 2
+insert_final_newline = true
+trim_trailing_whitespace = true
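The `[*.md]` settings above (final newline required, trailing whitespace trimmed) are exactly the clean-ups the rest of this commit applies by hand. A hedged local sketch of how such violations could be spotted without an EditorConfig-aware editor; the file name `sample.md` and the checks are illustrative, not part of the commit:

```shell
#!/usr/bin/env bash
# Create a file violating both rules: trailing spaces and no final newline.
# (sample.md is a made-up name for illustration.)
set -eu
printf 'line one   \nlast line' > sample.md

# Lines with trailing whitespace:
grep -n ' $' sample.md || true

# Missing final newline if the last byte of the file is not '\n'
# (command substitution strips a trailing newline, so "" means compliant):
if [ "$(tail -c 1 sample.md)" != "" ]; then
  echo "sample.md: missing final newline"
fi
```

Editors that honor `.editorconfig` apply both fixes automatically on save, which is why the file sits alongside the markdownlint config in this commit.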

.github/workflows/gh-pages-pr.yaml

+14 −3

@@ -7,13 +7,24 @@ on:
     paths:
       - .pages/**
       - docs/**
-      - .github/workflows/gh-pages.yaml
-      - README.md
+      - .github/workflows/gh-pages-pr.yaml
+      - '**.md'
 
 env:
   PLANTUML_VERSION: '1.2024.8'
 
 jobs:
+  lint:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+
+      - name: markdownlint-cli2-action
+        uses: DavidAnson/markdownlint-cli2-action@v9
+        with:
+          globs: '**/*.md'
+
   build:
     runs-on: ubuntu-latest
     steps:

@@ -25,7 +36,7 @@ jobs:
       with:
         hugo-version: '0.129.0'
         extended: true
-
+
       - name: Setup PlantUML
         run: |
           sudo apt-get update --yes
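The switch from `README.md` to `'**.md'` broadens the trigger from a single file to every Markdown file in the repository. A rough local sketch of what that glob covers, approximating the Actions path filter with bash globstar (the file names below are made up for illustration):

```shell
#!/usr/bin/env bash
# Approximate the workflow's '**.md' path filter with bash globstar.
set -eu
shopt -s globstar nullglob

tmp=$(mktemp -d)
mkdir -p "$tmp/docs"
touch "$tmp/README.md" "$tmp/docs/guide.md" "$tmp/docs/diagram.puml"
cd "$tmp"

# The old filter matched only README.md; '**.md' matches Markdown
# files at any depth, while the .puml file stays excluded.
printf '%s\n' **/*.md   # lists README.md and docs/guide.md
```

This matters for the lint job: it should run whenever any Markdown file changes, not just the README.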

.github/workflows/gh-pages.yaml

+1 −1

@@ -8,7 +8,7 @@ on:
       - .pages/**
       - docs/**
       - .github/workflows/gh-pages.yaml
-      - README.md
+      - '**.md'
 
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:

.markdownlint.yaml

+8
@@ -0,0 +1,8 @@
+# Default state for all rules
+default: true
+
+# Path to configuration file to extend
+extends: null
+
+# MD013/line-length : Line length : https://github.com/DavidAnson/markdownlint/blob/v0.32.1/doc/md013.md
+MD013: false
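The only rule the config turns off is MD013 (line length, 80 characters by default). A small local sketch of the kind of line that rule governs; the file name `sample.md` and the `awk` check are illustrative, and the real lint would be run with markdownlint-cli2 (requires Node.js):

```shell
#!/usr/bin/env bash
# Write a Markdown file whose third line exceeds MD013's default
# 80-character limit. With `MD013: false` in .markdownlint.yaml,
# markdownlint would no longer flag it.
set -eu
long_line=$(printf 'a%.0s' $(seq 1 120))
printf '# Title\n\n%s\n' "$long_line" > sample.md

# Real check (needs Node.js): npx markdownlint-cli2 sample.md
awk 'length($0) > 80 { print NR ": " length($0) " chars" }' sample.md   # prints: 3: 120 chars
```

Disabling MD013 is a common choice for prose-heavy repos, where hard-wrapping paragraphs hurts diff readability more than long lines do.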

.pages/archetypes/adr.md

+1 −2

@@ -8,7 +8,6 @@ date: "{{ time.Now.Format "2006-01-02" }}"
 | --- | --- | --- |
 | {"proposed \| rejected \| accepted \| deprecated \|\| superseded by ADR-0123"} | {YYYY-MM-DD when the decision was last updated} | {list everyone involved in the decision} |
 
-
 ## Context and Problem Statement
 
 {Describe the context and problem statement, e.g., in free form using two to three sentences or in the form of an illustrative story. You may want to articulate the problem in form of a question and add links to collaboration boards or issue management systems.}

@@ -29,4 +28,4 @@ Chosen option: "{title of option 1}", because {justification. e.g., only option,
 
 * Good, because {positive consequence, e.g., improvement of one or more desired qualities, …}
 * Bad, because {negative consequence, e.g., compromising one or more desired qualities, …}
-*<!-- numbers of consequences can vary -->
+*<!-- numbers of consequences can vary -->

README.md

+1
@@ -1,4 +1,5 @@
 # On-prem_Kubernetes_Guide
+
 An opinionated guide to on-prem Kubernetes
 
 ## How to run local

contributing.md

+10 −4

@@ -1,3 +1,5 @@
+# Contributing
+
 This guide is meant hold all of the experience from Eficodeans working with Kubernetes, distilled into one easily readable guide.
 
 This means, that we welcome all contributions to 'what is the right tech stack'.

@@ -6,11 +8,13 @@ There are fundamentally 2 ways to contribute to this guide: recommend a tool, an
 
 ## Recommend a tool
 
-If you want to recommend a tool, the place to start is to write an Architecture Decision Record (ADR). All tools recommended in the guide is reflected in an ADR.
+If you want to recommend a tool, the place to start is to write an Architecture Decision Record (ADR). All tools recommended in the guide is reflected in an ADR.
 
 To add an ADR do the following:
 
-`hugo new --kind adr <DesiredFolder>/ADRs/<NameOfADRFile>.md --source .pages`
+```shell
+hugo new --kind adr <DesiredFolder>/ADRs/<NameOfADRFile>.md --source .pages
+```
 
 Fill out the sections in the generated ADR
 

@@ -50,8 +54,10 @@ You can use either Devbox or Dev Containers to set up a consistent development e
 To preview the website locally while making changes:
 
 1. Run the Hugo development server:
-   ```
+
+   ```sh
    hugo server --source .pages
    ```
-2. Open your browser and navigate to `http://localhost:1313/On-prem_Kubernetes_Guide/ `
+
+2. Open your browser and navigate to `http://localhost:1313/On-prem_Kubernetes_Guide/`
 3. The website will automatically refresh when you make changes to the source files

docs/_index.md

+5
@@ -3,10 +3,13 @@ title: On-premises Kubernetes guide
 ---
 
 An opinionated guide to building and running your on-prem tech-stack for running Kubernetes.
+
 ## Introduction
+
 Deploying and operating Kubernetes on-premises is fundamentally different from doing so in the cloud. Without a managed control plane or provider-operated infrastructure, organizations must take full ownership of networking, security, and operational automation to ensure a stable and secure environment. The complexity of these decisions can quickly lead to fragmentation, inefficiency, and technical debt if not approached with a well-defined strategy.
 
 This guide delivers an opinionated, battle-tested roadmap for building a production-grade on-prem Kubernetes environment, structured around three foundational pillars:
+
 - [Getting your hardware ready to work with Kubernetes](hardware_ready/_index.md)
 - [Getting your software ready to work with Kubernetes](software_ready/_index.md)
 - [Working with Kubernetes](working_with_k8s/_index.md)

@@ -16,9 +19,11 @@ Instead of presenting endless options, we provide clear, prescriptive recommenda
 By following this approach, organizations can confidently design, deploy, and sustain an optimized, resilient, and future-compatible Kubernetes cluster, making informed decisions that balance control, flexibility, and operational efficiency from day one.
 
 ## Key differences between On-prem and Cloud Kubernetes
+
 One of the biggest challenges of running Kubernetes on-prem is the absence of elastic cloud-based scaling, where compute and storage resources can be provisioned on demand. Instead, on-prem environments require careful capacity planning to avoid resource contention while minimizing unnecessary infrastructure costs. Additionally, the operational burden extends beyond initial deployment—day-two operations such as upgrades, observability, disaster recovery, and compliance enforcement demand greater automation and proactive management to maintain stability and performance. Without cloud-native integrations, teams must build and maintain their own ecosystem of networking, storage, and security solutions, ensuring that each component is optimized for reliability and maintainability. These factors make on-prem Kubernetes deployments more complex but also provide greater control over cost, security, and regulatory compliance.
 
 ## Document Structure
+
 With the introduction and key differences out of the way, we can now get into the important parts of the document. As mentioned in the introduction, the document is structured around three foundational pillars, namely:
 
 - [Getting your hardware ready to work with Kubernetes](hardware_ready/_index.md)

docs/guide.md

+14 −1

@@ -1,9 +1,15 @@
+---
+title: On-premises Kubernetes guide
+---
 
 An opinionated guide to building and running your on-prem tech-stack for running Kubernetes.
+
 ## Introduction
+
 Deploying and operating Kubernetes on-premises is fundamentally different from doing so in the cloud. Without a managed control plane or provider-operated infrastructure, organizations must take full ownership of networking, security, and operational automation to ensure a stable and secure environment. The complexity of these decisions can quickly lead to fragmentation, inefficiency, and technical debt if not approached with a well-defined strategy.
 
 This guide delivers an opinionated, battle-tested roadmap for building a production-grade on-prem Kubernetes environment, structured around three foundational pillars:
+
 - Getting your hardware ready to work with Kubernetes
 - Getting your software ready to work with Kubernetes
 - Working with Kubernetes

@@ -13,9 +19,11 @@ Instead of presenting endless options, we provide clear, prescriptive recommenda
 By following this approach, organizations can confidently design, deploy, and sustain an optimized, resilient, and future-compatible Kubernetes cluster, making informed decisions that balance control, flexibility, and operational efficiency from day one.
 
 ## Key differences between On-prem and Cloud Kubernetes
+
 One of the biggest challenges of running Kubernetes on-prem is the absence of elastic cloud-based scaling, where compute and storage resources can be provisioned on demand. Instead, on-prem environments require careful capacity planning to avoid resource contention while minimizing unnecessary infrastructure costs. Additionally, the operational burden extends beyond initial deployment—day-two operations such as upgrades, observability, disaster recovery, and compliance enforcement demand greater automation and proactive management to maintain stability and performance. Without cloud-native integrations, teams must build and maintain their own ecosystem of networking, storage, and security solutions, ensuring that each component is optimized for reliability and maintainability. These factors make on-prem Kubernetes deployments more complex but also provide greater control over cost, security, and regulatory compliance.
 
 ## Document Structure
+
 With the introduction and key differences out of the way, we can now get into the important parts of the document. As mentioned in the introduction, the document is structured around three foundational pillars, namely:
 
 - Getting your hardware ready to work with Kubernetes

@@ -25,7 +33,9 @@ With the introduction and key differences out of the way, we can now get into th
 For each of these pillars, we will be providing you with primary and secondary recommendations regarding tech-stack and any accompanying tools. These recommendations will go over the tools themselves and provide you with arguments for choosing them, as well as listing out common pitfalls and important points of consideration.
 
 ## Getting your hardware ready to work with Kubernetes
+
 ### Virtualisation or bare metal
+
 One important aspect is to determine whether the clusters should run on an OS directly on the machines, or if it makes sense to add a virtualisation layer.
 
 Running directly on the hardware gives you a 1-1 relationship between the machines and the nodes. This is not always advised if the machines are particularly beefy. Running directly on the hardware will of course have lower latency than when adding a virtualisation layer.

@@ -35,16 +45,19 @@ A virtualisation layer can benefit via abstracting the actual hardware, and enab
 In case virtualisation is chosen, the below recommendations are what you would run in your VM. For setting up your VM’s we recommend Talos with KubeVirt.
 
 ### Decision Matrix
+
 | Problem domain | Description | Reason for importance | Primary tool recommendation | Secondary tool recommendation |
 |:---:|:---:|:---:|:---:|:---:|
 | Kubernetes Node Operating System | The Operating System running on each of the hosts that will be part of your Kubernetes cluster | Choosing the right OS will be the foundation for building a production-grade Kubernetes cluster | Talos Linux | Flatcar Linux |
 | Storage solution | The underlying storage capabilities which Kubernetes will leverage to provide persistence for stateful workloads | Choosing the right storage solution for your clusters needs is important as there is a lot of balance tradeoffs associated with it, e.g redundancy vs. complexity | Longhorn (iscsi) OpenEBS (iscsi) | Rook Ceph |
 | Container Runtime (CRI) | The software that is responsible for running containers | You need a working container runtime on each node in your cluster, so that the kubelet can launch pods and their containers | Containerd (embedded in Talos??? But maybe always containerd anyways?) | |
 | Network plugin (CNI) | Plugin used for cluster networking | A CNI plugin is required to implement the Kubernetes network model | Cillium? | Calico? |
 
-
 ## Getting your software ready to work with Kubernetes
+
+<!-- markdownlint-disable MD024 -->
 ### Decision Matrix
+
 | Problem domain | Description | Reason for importance | Primary tool recommendation | Secondary tool recommendation |
 |:---:|:---:|:---:|:---:|:---:|
 | Image Registry | A common place to store and fetch images | High availability, secure access control | Harbor | Sonatype Nexus |

docs/hardware_ready/ADRs/Cilium_as_network_plugin.md

+8 −18

@@ -6,12 +6,9 @@ title: Use Cilium as Network Plugin
 | --- | --- | --- |
 | proposed | 2025-02-18 | Alexandra Aldershaab, Steffen Petersen |
 
-
 ## Context and Problem Statement
 
-A CNI plugin is required to implement the Kubernetes network model by assigning IP addresses from preallocated CIDR ranges
-to pods and nodes. The CNI plugin is also responsible for enforcing network policies that control how traffic flows between
-namespaces as well as between the cluster and the internet.
+A CNI plugin is required to implement the Kubernetes network model by assigning IP addresses from preallocated CIDR ranges to pods and nodes. The CNI plugin is also responsible for enforcing network policies that control how traffic flows between namespaces as well as between the cluster and the internet.
 
 ## Considered Options
 

@@ -21,9 +18,7 @@ namespaces as well as between the cluster and the internet.
 
 ## Decision Outcome
 
-Chosen option: **Cilium**, because it is a fully conformant CNI plugin that works in both cloud and on-premises environments
-while also providing support for network policies as well as more advanced networking features. Cilium has also gained
-rapid adoption in the Kubernetes community and is considered the future standard of CNI plugins.
+Chosen option: **Cilium**, because it is a fully conformant CNI plugin that works in both cloud and on-premises environments while also providing support for network policies as well as more advanced networking features. Cilium has also gained rapid adoption in the Kubernetes community and is considered the future standard of CNI plugins.
 
 Flannel was considered, but it does not support network policies which is considered a hard requirement.
 

@@ -32,14 +27,9 @@ Calico, while supporting Network policies, falls short compared to Cilium in ter
 ### Consequences
 
 * Good, because Cilium provides support for network policies on L7 as well as the usual L3/L4.
-* Good, because Cilium provides support for BGP controlplane integration, allowing for seamless integration with existing
-networking infrastructure.
-* Good, because Cilium provides a feature called Egress Gateway which allows for traffic exiting the cluster to be routed
-through specific nodes, facilitating smooth integration with existing security infrastructure such as IP-based firewalls.
-* Good, because Cilium comes with a utility called Hubble which provides deep observability into the network traffic, allowing
-for easy debugging and troubleshooting of network issues.
-
-* Bad, because Cilium requires you to understand both Kubernetes networking and tradition networking concepts to fully utilize
-its advanced features.
-* Bad, because Cilium does not come installed by default on any flavor of Kubernetes, requiring additional steps to
-install it and provide necessary custom configuration.
+* Good, because Cilium provides support for BGP controlplane integration, allowing for seamless integration with existing networking infrastructure.
+* Good, because Cilium provides a feature called Egress Gateway which allows for traffic exiting the cluster to be routed through specific nodes, facilitating smooth integration with existing security infrastructure such as IP-based firewalls.
+* Good, because Cilium comes with a utility called Hubble which provides deep observability into the network traffic, allowing for easy debugging and troubleshooting of network issues.
+
+* Bad, because Cilium requires you to understand both Kubernetes networking and tradition networking concepts to fully utilize its advanced features.
+* Bad, because Cilium does not come installed by default on any flavor of Kubernetes, requiring additional steps to install it and provide necessary custom configuration.

docs/hardware_ready/ADRs/talos_as_os.md

+2 −2

@@ -8,10 +8,10 @@ date: "2025-02-25"
 | --- | --- | --- |
 | proposed | 2025-02-25 | Sofus Albertsen |
 
-
 ## Context and Problem Statement
 
 Choosing the right operating system for your Kubernetes cluster is crucial for stability, security, and operational efficiency. The OS should be optimized for container workloads, minimize overhead, and integrate well with Infrastructure as Code (IaC) practices.
+
 ## Considered Options
 
 * Talos OS

@@ -37,4 +37,4 @@ While their dashboards can simplify initial setup, they can also encourage "clic
 
 * **Bad:** The learning curve for Talos OS might be steeper initially for teams unfamiliar with its API-driven approach.
 * **Bad:** The lack of a graphical user interface might be a drawback for some users accustomed to traditional OS management.
-* **Bad:** Talos is a relatively newer project compared to OpenShift or Rancher, therefore community support and available resources might be smaller.
+* **Bad:** Talos is a relatively newer project compared to OpenShift or Rancher, therefore community support and available resources might be smaller.

docs/hardware_ready/_index.md

+2
@@ -2,6 +2,7 @@
 title: Getting your hardware ready
 ---
 ## Virtualisation or bare metal
+
 One important aspect is to determine whether the clusters should run on an OS directly on the machines, or if it makes sense to add a virtualisation layer.
 
 Running directly on the hardware gives you a 1-1 relationship between the machines and the nodes. This is not always advised if the machines are particularly beefy. Running directly on the hardware will of course have lower latency than when adding a virtualisation layer.

@@ -11,6 +12,7 @@ A virtualisation layer can benefit via abstracting the actual hardware, and enab
 In case virtualisation is chosen, the below recommendations are what you would run in your VM. For setting up your VM’s we recommend Talos with KubeVirt.
 
 ## Decision Matrix
+
 | Problem domain | Description | Reason for importance | Tool recommendation |
 |:---:|:---:|:---:|:---:|
 | Kubernetes Node Operating System | The Operating System running on each of the hosts that will be part of your Kubernetes cluster | Choosing the right OS will be the foundation for building a production-grade Kubernetes cluster | [Talos OS](hardware_ready/ADRs/talos_as_os.md) |

0 commit comments
