“We needed a consistent platform to deploy and manage containers on-premise and in the cloud. As Kubernetes has become the industry standard, it was natural for us to adopt Kubernetes Engine on GCP to reduce the risk and cost of our deployments.” - Dinesh KESWANI, Global Chief Technology Officer at HSBC
“With the Asylo toolset, Gemalto sees accelerated use of secure enclaves for high security assurance applications in cloud and container environments. Asylo makes it easy to attach container-based applications to securely isolate computations. Combining this with Gemalto’s SafeNet Data Protection On Demand paves the way to build trust across various industry applications, including: 5G, Virtual Network Functions (VNFs), blockchain, payments, voting systems, secure analytics, and others that require secure application secrets. Using Asylo, we envision our customers gaining deployment flexibility across multiple cloud environments and the assurance of meeting strict regulatory requirements for data protection and encryption key ownership.” — Todd Moore, Senior Vice President of Data Protection at Gemalto
kubectl create namespace test
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test

kubectl apply -f test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
kubectl apply -f pod.yaml --namespace=test
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: test
  labels:
    name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
$ kubectl get pods
No resources found.
$ kubectl get pods --namespace=test
NAME    READY   STATUS    RESTARTS   AGE
mypod   1/1     Running   0          10s
kubens test
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
mypod   1/1     Running   0          10m
<Service Name>.<Namespace Name>.svc.cluster.local
database.test
database.production
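Putting the pieces together, a Pod can reach a Service in another namespace by its fully qualified DNS name. A minimal sketch of how the name expands, using the `database` Service and `test` namespace from the examples above:

```shell
# Build the fully qualified DNS name for a Service.
# "database" and "test" mirror the examples above.
SERVICE=database
NAMESPACE=test
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"   # database.test.svc.cluster.local
```

The short name `database` only resolves from Pods in the same namespace; the full form works from anywhere in the cluster.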
gcloud container clusters create example-cluster \
    --scopes=compute-rw,gke-default
PROJECT_ID=$(gcloud config get-value project)
PRIMARY_ACCOUNT=$(gcloud config get-value account)

# Specify your cluster name.
CLUSTER=cluster-1

# You may have to grant yourself permission to manage roles.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin --user $PRIMARY_ACCOUNT

# Create an IAM service account named "gke-pod-reader",
# which we will allow to read Pods.
gcloud iam service-accounts create gke-pod-reader \
  --display-name "GKE Pod Reader"

USER_EMAIL=gke-pod-reader@$PROJECT_ID.iam.gserviceaccount.com

cat > pod-reader-clusterrole.yaml <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF
kubectl create -f pod-reader-clusterrole.yaml

cat > pod-reader-clusterrolebinding.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-global
subjects:
- kind: User
  name: $USER_EMAIL
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f pod-reader-clusterrolebinding.yaml

# Check the permissions of our Pod Reader user.
gcloud iam service-accounts keys create pod-reader-key.json \
  --iam-account $USER_EMAIL
gcloud container clusters get-credentials $CLUSTER
gcloud auth activate-service-account $USER_EMAIL \
  --key-file=pod-reader-key.json

# Our user can get/list all Pods in the cluster.
kubectl get pods --all-namespaces

# But they can't see the Deployments, Services, or nodes.
kubectl get deployments --all-namespaces
kubectl get services --all-namespaces
kubectl get nodes

# Reset gcloud and kubectl to your main user.
gcloud config set account $PRIMARY_ACCOUNT
gcloud container clusters get-credentials $CLUSTER
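Instead of switching credentials back and forth, you can also spot-check a binding with `kubectl auth can-i` impersonation. A sketch, assuming the same service-account email derivation as above (the project ID here is a placeholder):

```shell
# Placeholder project ID; in practice use $(gcloud config get-value project).
PROJECT_ID=my-project
USER_EMAIL="gke-pod-reader@${PROJECT_ID}.iam.gserviceaccount.com"
echo "$USER_EMAIL"   # gke-pod-reader@my-project.iam.gserviceaccount.com

# Illustrative checks (they require cluster access, so commented out here):
# kubectl auth can-i list pods --as="$USER_EMAIL"    # expect "yes"
# kubectl auth can-i list nodes --as="$USER_EMAIL"   # expect "no"
```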
gcloud config set container/use_v1_api false
FROM node:onbuild
EXPOSE 8080
FROM node:alpine
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --production
COPY server.js /app/server.js
EXPOSE 8080
CMD npm start
FROM golang:onbuild
EXPOSE 8080
FROM golang:alpine
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp
EXPOSE 8080
ENTRYPOINT ./goapp
FROM golang:alpine AS build-env
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp

FROM alpine
RUN apk update && \
    apk add ca-certificates && \
    update-ca-certificates && \
    rm -rf /var/cache/apk/*
WORKDIR /app
COPY --from=build-env /app/goapp /app
EXPOSE 8080
ENTRYPOINT ./goapp
Go Onbuild: 35 seconds
Go Multistage: 23 seconds
Go Onbuild: 15 seconds
Go Multistage: 14 seconds
Go Onbuild: 26 seconds
Go Multistage: 6 seconds
Go Onbuild: 25 seconds
Go Multistage: 20 seconds
Go Onbuild: 52 seconds
Go Multistage: 6 seconds
Go Onbuild: 54 seconds
Go Multistage: 28 seconds
Go Onbuild: 48 seconds
Go Multistage: 16 seconds
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=<path to Dockerfile>",
           "--bucket=<GCS bucket>",
           "--destination=<gcr.io/$PROJECT/$REPO:$TAG>"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
steps:
- name: gcr.io/kaniko-project/executor:latest
  args: ["--dockerfile=<path to Dockerfile>",
         "--context=<path to build context>",
         "--destination=<gcr.io/[PROJECT]/[IMAGE]:[TAG]>"]
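To run this config, submit the build from the directory containing your cloudbuild.yaml. A sketch of how the destination flag expands (the project, image, and tag values below are placeholders):

```shell
# Placeholder values; substitute your own project, image name, and tag.
PROJECT=my-project
IMAGE=my-app
TAG=v1
DESTINATION="gcr.io/${PROJECT}/${IMAGE}:${TAG}"
echo "$DESTINATION"   # gcr.io/my-project/my-app:v1

# Submit the build (needs gcloud auth, so shown commented out):
# gcloud builds submit --config cloudbuild.yaml .
```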
Demonstrate your proficiency in designing, building, and managing solutions on Google Cloud Platform.