Kubernetes CKA sample exam question 54 with answer

Question
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next, create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should bind to the PV correctly.
Finally, create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The pods of that Deployment should use image httpd:2.4.41-alpine.

Answer
Go to the Kubernetes documentation, grab the manifest for a PV with hostPath, and adjust it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
Apply and verify:
kubectl apply -f pv.yaml
kubectl get pv
Following the same procedure, adjust the PVC manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Create it:
kubectl apply -f pvc.yaml
The PVC should be in Bound state:
kubectl -n project-tiger get pvc
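If the PVC stays Pending instead of Bound, a common cause is a default StorageClass in the cluster, which makes the claim wait for a dynamically provisioned volume instead of matching the static PV. This is not required by the question, but as a sketch, setting storageClassName to the empty string in both the PV and PVC spec disables dynamic provisioning and forces static binding:

```yaml
# Optional: add to BOTH the PV spec and the PVC spec if the cluster
# defines a default StorageClass. An empty string explicitly requests
# "no storage class", so the claim binds only to a matching static PV.
spec:
  storageClassName: ""
```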
Generate a deployment manifest:
kubectl -n project-tiger create deploy safari --image=httpd:2.4.41-alpine --dry-run=client -o yaml > dep.yaml
Adjust manifest to look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd
        volumeMounts:
        - mountPath: "/tmp/safari-data"
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: safari-pvc
Apply and verify:
kubectl apply -f dep.yaml
kubectl -n project-tiger get deploy
and describe one of the Deployment's pods to confirm the volume is mounted:
kubectl -n project-tiger describe pod -l app=safari
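Beyond describe, the mount can also be checked from inside a running pod. A quick sketch, assuming the rollout has finished (kubectl exec accepts a deploy/ reference and picks one of its pods):

```shell
# List the mounted hostPath directory inside a pod of the Deployment
kubectl -n project-tiger exec deploy/safari -- ls -la /tmp/safari-data

# Or confirm the mount point appears in the pod's mount table
kubectl -n project-tiger exec deploy/safari -- mount | grep safari-data
```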