Firefly Open Source Community

[General] CKS Japanese Self-Study Books & CKS Study Materials


Posted 11 hours ago | Views: 7 | Replies: 0 | #1
In addition, part of the Jpexam CKS dumps is currently available for free: https://drive.google.com/open?id=1tFt_6Qc-KUV399ED0OEEtUZZ7t_ip5AL
We are never satisfied with the status quo and continually expand and update the CKS exam practice guide. We focus on innovation, organizing a professional team to compile new knowledge points and update the test bank. We treat our clients as gods, and their support for our CKS study materials is the driving force that moves us forward, so clients can enjoy the latest innovations in the CKS exam questions and gain ever more learning resources. The credit belongs to our diligent and dedicated professional innovation team and experts.
The CKS certification targets IT professionals, security professionals, DevOps engineers, system administrators, and developers working with Kubernetes and containerized applications. It requires candidates to demonstrate expert knowledge across a range of Kubernetes security topics, including securing Kubernetes components, securing container images, securing network communication, and implementing security policies.
The CKS exam is a hands-on, performance-based test of your ability to secure Kubernetes clusters. It consists of 17 scenarios that simulate real-world situations a Kubernetes administrator may face. These scenarios are designed to test your understanding of Kubernetes security concepts, your ability to identify and mitigate common vulnerabilities, and your grasp of best practices for securing Kubernetes clusters. The exam is delivered online and can be taken anywhere in the world. Candidates must pass the exam to earn the CKS certification, which is valid for two years.
CKS study materials, CKS technical exam: Jpexam is a website dedicated to helping IT professionals achieve their dreams. If you have an IT dream, come to Jpexam. Jpexam offers excellent training, namely the Linux Foundation CKS exam training materials that IT professionals are eager for, because they can help you pass the exam.
The CKS certification exam is designed to test candidates' understanding of Kubernetes security features and their ability to implement best practices for securing the Kubernetes platform and containerized applications. The exam covers a wide range of topics, including Kubernetes API authentication and authorization, network security, storage security, and implementing security policies.
Linux Foundation Certified Kubernetes Security Specialist (CKS) Certification Exam Questions (Q27-Q32):

Question 27
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
a. Ensure the --authorization-mode argument includes RBAC
b. Ensure the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the Kubelet:
a. Ensure the --anonymous-auth argument is set to false
b. Ensure that the --authorization-mode argument is set to Webhook
Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true
Hint: use the kube-bench tool.
Correct Answer:
Explanation:
API server:
Ensure the --authorization-mode argument includes RBAC
Turn on Role Based Access Control. Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
+   - kube-apiserver
+   - --authorization-mode=RBAC,Node
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
Ensure the --authorization-mode argument includes Node
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'Node,RBAC' has 'Node'
Ensure that the --profiling argument is set to false
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'false' is equal to 'false'
Fix all of the following violations that were found against the Kubelet: 1) Ensure the --anonymous-auth argument is set to false.
Remediation: If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result:
'false' is equal to 'false'
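For reference, both of these Kubelet settings (this one and the Webhook authorization fix in step 2 below) can also be expressed in the kubelet config file rather than as flags. A minimal sketch, assuming the default kubeadm path /var/lib/kubelet/config.yaml:
# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # equivalent to --anonymous-auth=false
authorization:
  mode: Webhook         # equivalent to --authorization-mode=Webhook
After editing, restart the kubelet (systemctl daemon-reload && systemctl restart kubelet) as shown above.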
2) Ensure that the --authorization-mode argument is set to Webhook.
Audit:
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned value: --authorization-mode=Webhook
Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true
Do not use self-signed certificates for TLS. etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable client authentication via valid certificates to secure access to the etcd service.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
+   - etcd
+   - --auto-tls=false
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
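Once the affected components have restarted, it is worth re-running kube-bench to confirm the findings are gone, as the hint suggests. A sketch, assuming kube-bench is installed on the respective nodes (check IDs and targets vary by benchmark version):
# On the master node: re-check API server and etcd findings
kube-bench run --targets master,etcd | grep -E "FAIL|WARN"
# On the worker node: re-check kubelet findings
kube-bench run --targets node | grep -E "FAIL|WARN"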

Question 28
Context:
Cluster: gvisor
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context gvisor
Context: This cluster has been prepared to support the runsc runtime handler as well as the traditional one.
Task:
Create a RuntimeClass named not-trusted using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on the new runtime.
Correct Answer:
Explanation:
First create the RuntimeClass:
[desk@cli] $ vim runtime.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: not-trusted
handler: runsc
[desk@cli] $ k apply -f runtime.yaml
Then find all the Pods/Deployments in the namespace server and set the runtimeClassName parameter to not-trusted under the Pod template spec:
[desk@cli] $ k edit deploy nginx -n server
spec:
  template:
    spec:
      runtimeClassName: not-trusted   # Add this
[desk@cli] $ k get pods -n server
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6798fc88e8-chp6r   1/1     Running   0          11m
nginx-6798fc88e8-fs53n   1/1     Running   0          11m
nginx-6798fc88e8-ndved   1/1     Running   0          11m
[desk@cli] $ k get deploy -n server
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           5m
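To confirm the Pods were actually recreated on the new runtime, a quick verification sketch (output shape depends on the cluster):
[desk@cli] $ k get runtimeclass
[desk@cli] $ k get pods -n server -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.runtimeClassName}{"\n"}{end}'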


Question 29
SIMULATION
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task: Create a new default-deny NetworkPolicy named deny-network in the namespace test for all traffic of type Ingress + Egress. The new NetworkPolicy must deny all Ingress and Egress traffic in the namespace test.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace test.
You can find a skeleton manifests file at /home/cert_masters/network-policy.yaml
Correct Answer:
Explanation:
master1 $ k get pods -n test --show-labels
NAME READY STATUS RESTARTS AGE LABELS
test-pod 1/1 Running 0 34s role=test,run=test-pod
testing 1/1 Running 0 17d run=testing
$ vim netpol.yaml   # or start from the skeleton at /home/cert_masters/network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-network
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
master1 $ k apply -f netpol.yaml
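To confirm the policy applies to every Pod in the namespace (an empty podSelector selects all Pods), a quick check:
master1 $ k get netpol -n test
master1 $ k describe netpol deny-network -n test   # PodSelector should be empty; policy types Ingress and Egress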
Reference:
https://kubernetes.io/docs/conce ... g/network-policies/

Question 30
SIMULATION
Context
You must fully integrate a container image scanner into the kubeadm provisioned cluster.
Task
Given an incomplete configuration located at /etc/kubernetes/bouncer and a functional container image scanner with an HTTPS endpoint at https://smooth-yak.local/review, perform the following tasks to implement a validating admission controller.
First, re-configure the API server to enable all admission plugin(s) to support the provided AdmissionConfiguration.
Next, re-configure the ImagePolicyWebhook configuration to deny images on backend failure.
Next, complete the backend configuration to point to the container image scanner's endpoint at https://smooth-yak.local/review.
Finally, to test the configuration, deploy the test resource defined in /home/candidate/vulnerable.yaml which is using an image that should be denied.
You may delete and re-create the resource as often as needed.
The container image scanner's log file is located at /var/log/nginx/access_log.
Correct Answer:
Explanation:
Below is a CKS exam style, "do-this-exactly" runbook for this question. It includes the minimal discovery commands (so you don't guess filenames), then the exact lines/blocks to set.
Question 30 - ImagePolicyWebhook (Validating Admission) - Exam Steps
0) SSH + root
ssh cks000002
sudo -i
1) Identify the provided config files (no guessing)
ls -la /etc/kubernetes/bouncer
You are looking for files typically named like:
admission_configuration.yaml (AdmissionConfiguration)
imagepolicywebhook.yaml (ImagePolicyWebhookConfiguration), or the ImagePolicyWebhook config embedded inside the AdmissionConfiguration
kubeconfig (the webhook kubeconfig)
If unsure which is which, take a quick peek:
grep -R "ImagePolicyWebhook" -n /etc/kubernetes/bouncer
grep -R "AdmissionConfiguration" -n /etc/kubernetes/bouncer
grep -R "kubeconfig" -n /etc/kubernetes/bouncer
PART A - Reconfigure API Server to enable required admission plugin(s)
2) Edit API server static pod manifest
vi /etc/kubernetes/manifests/kube-apiserver.yaml
2.1 Enable the admission plugin ImagePolicyWebhook
Find the line starting with:
- --enable-admission-plugins=
Ensure ImagePolicyWebhook is included in that comma list.
Example (your list may differ; just add ImagePolicyWebhook):
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
If the flag does not exist, add one line under the command: list:
- --enable-admission-plugins=ImagePolicyWebhook
2.2 Point API server to the provided AdmissionConfiguration
In the same file, ensure this flag exists (use the file in /etc/kubernetes/bouncer that contains AdmissionConfiguration):
- --admission-control-config-file=/etc/kubernetes/bouncer/admission_configuration.yaml
If your file is named differently, use the real filename you found in step 1, but keep the flag name exactly --admission-control-config-file.
Save/exit:
:wq
Static pod will restart automatically (kubelet watches the manifest).
Optional quick watch:
docker ps | grep kube-apiserver
# or:
crictl ps | grep kube-apiserver
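For orientation, the relevant part of the manifest ends up looking roughly like this (a sketch; the exact flag list differs per cluster, and the hostPath volume/mount for /etc/kubernetes/bouncer is only needed if the exam cluster does not already mount that directory into the API server Pod):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    - --admission-control-config-file=/etc/kubernetes/bouncer/admission_configuration.yaml
    volumeMounts:
    - mountPath: /etc/kubernetes/bouncer
      name: bouncer
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/bouncer
      type: DirectoryOrCreate
    name: bouncer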
PART B - Configure ImagePolicyWebhook to deny images on backend failure
3) Edit the ImagePolicyWebhook config
One of these is true on your cluster:
Option 1 (most common in these tasks): the ImagePolicyWebhook config is a standalone file. Edit the file in /etc/kubernetes/bouncer that contains kind: ImagePolicyWebhookConfiguration:
grep -R "kind: ImagePolicyWebhookConfiguration" -n /etc/kubernetes/bouncer
vi /etc/kubernetes/bouncer/<THE_FILE_YOU_FOUND>.yaml
Set (or ensure) exactly:
defaultAllow: false
Option 2: the ImagePolicyWebhook config is embedded inside the AdmissionConfiguration. Edit the AdmissionConfiguration file:
vi /etc/kubernetes/bouncer/admission_configuration.yaml
Find the plugin section for ImagePolicyWebhook and ensure the config includes:
defaultAllow: false
✅ Save/exit:
:wq
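Either way, a complete AdmissionConfiguration with an embedded ImagePolicyWebhook section typically looks like this (a sketch; the kubeconfig filename follows step 1's discovery, and the TTL/backoff values are illustrative):
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/bouncer/kubeconfig
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false    # deny images when the backend fails or is unreachable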
PART C - Point backend configuration to https://smooth-yak.local/review
4) Edit the webhook kubeconfig to use the scanner endpoint
Find the kubeconfig file referenced by the ImagePolicyWebhook config.
Search for kubeConfigFile:
grep -R "kubeConfigFile" -n /etc/kubernetes/bouncer
Open that kubeconfig path (example name below; yours may differ):
vi /etc/kubernetes/bouncer/kubeconfig
In the kubeconfig, set the cluster server exactly:
clusters:
- cluster:
    server: https://smooth-yak.local/review
✅ Save/exit:
:wq
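For reference, the whole webhook kubeconfig usually has this shape (a sketch; the certificate paths and the cluster/user names here are hypothetical and should match whatever the provided file already contains):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/bouncer/ca.crt       # hypothetical path
    server: https://smooth-yak.local/review
  name: scanner
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/bouncer/client.crt      # hypothetical path
    client-key: /etc/kubernetes/bouncer/client.key              # hypothetical path
contexts:
- context:
    cluster: scanner
    user: api-server
  name: scanner
current-context: scanner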
PART D - Restart effect (make sure API server picks up config)
Because you already edited /etc/kubernetes/manifests/kube-apiserver.yaml, the API server restarted.
To be safe (and fast), force a restart by "touching" the manifest (no content change needed):
touch /etc/kubernetes/manifests/kube-apiserver.yaml
PART E - Test: apply vulnerable workload and confirm it is denied
5) Use admin kubeconfig (because old kubectl config may break)
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
6) Deploy the test resource (should be DENIED)
kubectl apply -f /home/candidate/vulnerable.yaml
Expected: admission error/denied message.
If it already exists:
kubectl delete -f /home/candidate/vulnerable.yaml
kubectl apply -f /home/candidate/vulnerable.yaml
PART F - Verify the scanner was called (log check)
7) Check scanner access log
tail -n 50 /var/log/nginx/access_log
You should see requests hitting /review.
Quick "what to check if it doesn't deny"
Run these in order:
Confirm API server flags:
grep -n "enable-admission-plugins" /etc/kubernetes/manifests/kube-apiserver.yaml grep -n "admission-control-config-file" /etc/kubernetes/manifests/kube-apiserver.yaml Confirm deny-on-failure:
grep -R "defaultAllow" -n /etc/kubernetes/bouncer
Must show:
defaultAllow: false
Confirm endpoint:
grep -R "server: https://smooth-yak.local/review" -n /etc/kubernetes/bouncer
API server logs (docker runtime):
docker ps | grep kube-apiserver
docker logs $(docker ps -q --filter name=kube-apiserver) --tail 80
If it still does not deny, re-check the provided files:
ls -la /etc/kubernetes/bouncer
grep -R "kind: AdmissionConfiguration" -n /etc/kubernetes/bouncer
grep -R "ImagePolicyWebhook" -n /etc/kubernetes/bouncer

Question 31
Task
Create a NetworkPolicy named pod-access to restrict access to Pod users-service running in namespace dev-team.
Only allow the following Pods to connect to Pod users-service:
[The list of allowed Pods was provided as an image in the original post.]
Correct Answer:
Explanation:
[The answer was provided as images in the original post and is not recoverable here.]
Question 32
......
CKS study materials: https://www.jpexam.com/CKS_exam.html
P.S. Free, up-to-date CKS dumps shared by Jpexam on Google Drive: https://drive.google.com/open?id=1tFt_6Qc-KUV399ED0OEEtUZZ7t_ip5AL
Quick Reply Back to top Back to list