
Description:
- Container orchestration engine:
- Automates deployment, scaling, and management of containerized applications.
- Docs and Ref
- A frontend refers to its backend through a Kubernetes Service, so if the backend Pod IPs change, the Service's endpoints are updated and traffic still reaches the backend (see the Deployment + Service sketch after this list)
- Why?
- container orchestration across multiple hosts
- auto-scaling
- load-balancing
- self-healing
- rolling updates and rollbacks
- Some other Kubernetes distributions exist (like Ubuntu is a distribution of Linux); they help with connecting nodes and networking (tasks otherwise done with kubeadm)
- 1 cluster is typically 1 context in your kubeconfig (see the kubeconfig sketch after this list)
- v1.34
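
A minimal sketch of the frontend-to-backend pattern above: a Deployment runs the backend Pods and a Service gives them a stable name and IP, so a frontend can keep calling `backend` even as Pod IPs change during rescheduling or rolling updates. All names, labels, and the image are hypothetical placeholders.

```yaml
# Hypothetical backend: names, labels, and the image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example.com/backend:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Stable virtual IP / DNS name ("backend") in front of the Pods above;
# a frontend talks to http://backend:80 regardless of Pod IP churn.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```

Because the Deployment replaces Pods during rolling updates, Pod IPs come and go, but the Service selector keeps the `backend` name pointing at whichever Pods currently carry the `app: backend` label.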
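
A kubeconfig sketch for the "1 cluster = 1 context" note: each context ties together a cluster and a user, and `kubectl` runs against whatever `current-context` points to. The server URL, names, and credential are placeholders.

```yaml
# Hypothetical kubeconfig: cluster name, server URL, and user are placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: dev-cluster
    cluster:
      server: https://dev.example.com:6443
users:
  - name: dev-admin
    user:
      token: REDACTED            # placeholder credential
contexts:
  - name: dev                    # one context for the one cluster above
    context:
      cluster: dev-cluster
      user: dev-admin
      namespace: default
current-context: dev
```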
A. Documentation
v1.34
B. Getting started
1. Learning env
2. Prod env
3. Best practices
C. Concepts:
1. Overview:
Components:
2. Cluster Architecture:
3. Containers:
4. Workloads:
Workload management:
Autoscaling workloads
Managing workloads
Vertical pod autoscaling
5. Service, load balancing and networking:
- The Kubernetes network model:
- each pod in a cluster gets its own cluster-wide unique IP address
- containers in a Pod share the same network namespace and communicate with each other over localhost (see the Pod sketch after this list)
- The pod network (cluster network) handles communication between pods, ensuring that:
- all pods can communicate with all other pods, on the same or a different node, without a proxy or NAT
- agents on a node (e.g. system daemons or the kubelet) can communicate with all pods on that node
- The Kubernetes Service API provides a stable, long-lived IP address or hostname for a service implemented by one or more backend Pods
- The K8s Gateway API allows you to make Services accessible to clients outside the cluster (see the Gateway sketch after this list)
- K8s NetworkPolicy is a built-in Kubernetes API that allows you to control traffic between pods, or between pods and the outside world (see the NetworkPolicy sketch after this list)
…
…
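
To illustrate the "same network namespace, localhost" point in the list above, here is a sketch of a Pod with two containers, where the sidecar reaches the web container on localhost. Names and images are placeholders.

```yaml
# Hypothetical two-container Pod: both containers share one network namespace,
# so the sidecar can reach the web container at localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.27              # placeholder image, listens on port 80
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:latest  # placeholder image
      command: ["sh", "-c", "while true; do curl -s http://localhost:80/ >/dev/null; sleep 10; done"]
```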
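
A rough Gateway API sketch for exposing a Service (here the hypothetical `backend` Service from the Description sketch) to clients outside the cluster. The GatewayClass name and hostname are placeholders and depend on which gateway controller is installed in the cluster.

```yaml
# Hypothetical Gateway + HTTPRoute; gatewayClassName depends on the installed controller.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public-gateway
spec:
  gatewayClassName: example-gateway-class   # placeholder class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-route
spec:
  parentRefs:
    - name: public-gateway
  hostnames:
    - app.example.com                        # placeholder hostname
  rules:
    - backendRefs:
        - name: backend                      # the Service from the earlier sketch
          port: 80
```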
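
A NetworkPolicy sketch for the last bullet: only Pods labelled `app: frontend` may send traffic to the hypothetical `backend` Pods on port 8080. Labels and the port are placeholders, and the policy is only enforced if the cluster's network plugin supports NetworkPolicy.

```yaml
# Hypothetical policy: allow ingress to backend Pods only from frontend Pods on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```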
6. Storage:
7. Configuration
8. Security
9. Policies
10. Scheduling, preemption and eviction
11. Cluster administration:
Cluster networking
12. Windows in Kubernetes
13. Extending Kubernetes
D. Tasks:
1.
6.
8. Run applications:
9. Run jobs
E. Tutorials
F. Reference
Kubernetes API
Workload resources
Networking reference