<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[MedInvention]]></title><description><![CDATA[Lab's stories and ideas.]]></description><link>https://blog.medinvention.dev/</link><image><url>https://blog.medinvention.dev/favicon.png</url><title>MedInvention</title><link>https://blog.medinvention.dev/</link></image><generator>Ghost 3.11</generator><lastBuildDate>Sun, 12 Apr 2026 02:14:17 GMT</lastBuildDate><atom:link href="https://blog.medinvention.dev/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Vault & Consul Kubernetes Deployment]]></title><description><![CDATA[<p>Following this post, you will be able to deploy, configure, and use a <a href="https://www.vaultproject.io/" rel="nofollow">HashiCorp Vault</a> with <a href="https://www.consul.io/" rel="nofollow">HashiCorp Consul</a>, and try it in your Kubernetes cluster with a sample application.</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2021/02/vault.png" class="kg-image"></figure><blockquote>Secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.</blockquote>]]></description><link>https://blog.medinvention.dev/vault-consul-kubernetes-deployment/</link><guid isPermaLink="false">602e508a2076420001b322d2</guid><category><![CDATA[vault]]></category><category><![CDATA[consul]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Thu, 18 Feb 2021 12:08:26 GMT</pubDate><media:content 
url="https://images.unsplash.com/photo-1565126111587-f9fb04a432e4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDIxfHx2YXVsdHxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1565126111587-f9fb04a432e4?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDIxfHx2YXVsdHxlbnwwfHx8&ixlib=rb-1.2.1&q=80&w=2000" alt="Vault & Consul Kubernetes Deployment"><p>Following this post, you will be able to deploy, configure, and use a <a href="https://www.vaultproject.io/" rel="nofollow">HashiCorp Vault</a> with <a href="https://www.consul.io/" rel="nofollow">HashiCorp Consul</a>, and try it in your Kubernetes cluster with a sample application.</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2021/02/vault.png" class="kg-image" alt="Vault & Consul Kubernetes Deployment"></figure><blockquote>Secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.</blockquote><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2021/02/consul.png" class="kg-image" alt="Vault & Consul Kubernetes Deployment"></figure><blockquote>Consul is a service mesh solution providing a full featured control plane with service discovery, configuration, and segmentation functionality. 
Each of these features can be used individually as needed, or they can be used together to build a full service mesh.</blockquote><h2 id="1-stack-">1- Stack :</h2><ul><li>Kubelet : <strong>v1.17.2 / v1.18.5</strong></li><li>Kubectl : <strong>v1.17.1</strong></li><li>Docker : <strong>19.03.5 / 19.03.8</strong></li><li>Consul : <strong>1.9.3</strong></li><li>Vault : <strong>1.6.2 (Agent 0.8.0)</strong></li><li>Cfssl : <strong>1.2.0</strong></li><li>Kube namespace : <strong>vault</strong> <em>(if you use a different namespace, it must be changed in service and pod hostnames)</em></li><li>Architecture : <strong>AMD64 / ARM64</strong></li></ul><h2 id="2-consul-deployment-">2- Consul deployment :</h2><p>1.	First, generate SSL certificates for Consul (this can be done on a workstation) with <a href="https://cfssl.org/" rel="nofollow">Cfssl</a>, after editing the configuration files in the <a href="https://github.com/mmohamed/vault-kubernetes/blob/main/consul/ca">consul/ca</a> directory</p><!--kg-card-begin: markdown--><pre><code># Generate CA and sign request for Consul
cfssl gencert -initca ca/ca-csr.json | cfssljson -bare ca 
# Generate SSL certificates for Consul
cfssl gencert \ 
-ca=ca.pem \ 
-ca-key=ca-key.pem \ 
-config=ca/ca-config.json \ 
-profile=default \ 
ca/consul-csr.json | cfssljson -bare consul 
# Prepare a gossip key for encrypting Consul member communication
GOSSIP_ENCRYPTION_KEY=$(consul keygen)</code></pre>
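<p>If the <code>consul</code> binary is not installed on your workstation, an equivalent key can be generated with openssl: <code>consul keygen</code> simply emits random bytes encoded in base64, and Consul's gossip layer accepts 16-, 24-, or 32-byte keys. A minimal sketch:</p>

```shell
# Equivalent of `consul keygen` without the consul binary:
# 32 random bytes, base64-encoded (Consul accepts 16-, 24-, or 32-byte keys)
GOSSIP_ENCRYPTION_KEY=$(openssl rand -base64 32)
echo "$GOSSIP_ENCRYPTION_KEY"
```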
<!--kg-card-end: markdown--><p>2.	Create a secret with the gossip key and the public/private keys</p><!--kg-card-begin: markdown--><pre><code>kubectl create secret generic consul \
--from-literal=&quot;gossip-encryption-key=${GOSSIP_ENCRYPTION_KEY}&quot; \
--from-file=ca.pem \
--from-file=consul.pem \
--from-file=consul-key.pem</code></pre>
<!--kg-card-end: markdown--><p>3.	Deploy <strong>3</strong> Consul members (<strong>Statefulset</strong>)</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f consul/service.yaml
kubectl apply -f consul/rbac.yaml
kubectl apply -f consul/config.yaml
kubectl apply -f consul/consul.yaml</code></pre>
<!--kg-card-end: markdown--><p>4.	Prepare SSL certificates for the Consul client; they will be used by the Vault Consul client (sidecar).</p><!--kg-card-begin: markdown--><pre><code>cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca/ca-config.json \
-profile=default \
ca/consul-csr.json | cfssljson -bare client-vault</code></pre>
<!--kg-card-end: markdown--><p>5.	Create a secret for the Consul client (as for the members)</p><!--kg-card-begin: markdown--><pre><code>kubectl create secret generic client-vault \
--from-literal=&quot;gossip-encryption-key=${GOSSIP_ENCRYPTION_KEY}&quot; \
--from-file=ca.pem \
--from-file=client-vault.pem \
--from-file=client-vault-key.pem</code></pre>
<!--kg-card-end: markdown--><h2 id="3-vault-deployment-">3- Vault deployment :</h2><p>Before deploying Vault, you need to configure a Consul client that gives Vault access to the Consul members.</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-config
data:
  ...
  concul.config: |
    {
      &quot;verify_incoming&quot;: false,
      &quot;verify_outgoing&quot;: true,
      &quot;server&quot;: false,
      &quot;ca_file&quot;: &quot;/etc/tls/ca.pem&quot;,
      &quot;cert_file&quot;: &quot;/etc/tls/client-vault.pem&quot;,
      &quot;datacenter&quot;: &quot;vault&quot;,
      &quot;key_file&quot;: &quot;/etc/tls/client-vault-key.pem&quot;,
      &quot;client_addr&quot;: &quot;127.0.0.1&quot;,
      &quot;ui&quot;: false,
      &quot;raft_protocol&quot;: 3,
      &quot;retry_join&quot;: [ &quot;provider=k8s label_selector=\&quot;app=consul,role=server\&quot; namespace=\&quot;vault\&quot;&quot; ]
    }</code></pre>
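<p>Such JSON blocks are easy to get subtly wrong. Before baking the config into the ConfigMap, it can be sanity-checked locally with python3's json.tool (a quick sketch using the client config above):</p>

```shell
# Catch JSON typos before applying the ConfigMap
python3 -m json.tool <<'EOF'
{
  "verify_incoming": false,
  "verify_outgoing": true,
  "server": false,
  "ca_file": "/etc/tls/ca.pem",
  "cert_file": "/etc/tls/client-vault.pem",
  "datacenter": "vault",
  "key_file": "/etc/tls/client-vault-key.pem",
  "client_addr": "127.0.0.1",
  "ui": false,
  "raft_protocol": 3,
  "retry_join": [ "provider=k8s label_selector=\"app=consul,role=server\" namespace=\"vault\"" ]
}
EOF
```

<p>A non-zero exit status (and an error message pointing at the offending line) means the JSON is invalid.</p>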
<!--kg-card-end: markdown--><p>The Consul client will be deployed as a sidecar of the Vault server, so <em><strong>"client_addr"</strong></em> must be <em><strong>"127.0.0.1"</strong></em>. For the certificate parameters, we will use the <em><strong>client-vault</strong></em> secret and the same join expression as in the members' configuration.</p><p>On the Vault side, we need to configure the server itself, including its <em><strong>"listener"</strong></em> parameter.</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-config
data:
  vault.config: |
    {
      &quot;ui&quot;: true,
      &quot;listener&quot;: [{
        &quot;tcp&quot;: {
          &quot;address&quot;: &quot;0.0.0.0:8200&quot;,
          &quot;tls_disable&quot;: true
        }
      }],
...</code></pre>
<!--kg-card-end: markdown--><p>OK, let's deploy</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f vault/service.yaml
kubectl apply -f vault/config.yaml
kubectl apply -f vault/vault.yaml</code></pre>
<!--kg-card-end: markdown--><h2 id="5-ui-">5- UI:</h2><p>At this point, we have <strong>3</strong> instances of Consul deployed and <strong>1</strong> instance of Vault connected to the Consul members.</p><p>We could use port forwarding to access the Consul and Vault UIs. In our case, we will use an <em><strong>"Ingress"</strong></em> to expose the UIs to the internet.</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f ingress.yaml</code></pre>
<!--kg-card-end: markdown--><blockquote>If you use this option with SSL (HTTPS), you need to configure the TLS secret.</blockquote><h2 id="6-vault-injector-deployment">6- Vault Injector deployment</h2><ul><li>Install the Vault agent injector (a single, simple instance without leader election)</li></ul><!--kg-card-begin: markdown--><pre><code>kubectl apply -f vault-injector/service.yaml
kubectl apply -f vault-injector/rbac.yaml
kubectl apply -f vault-injector/deployment.yaml
kubectl apply -f vault-injector/webhook.yaml # webhook must be created after deployment</code></pre>
<!--kg-card-end: markdown--><blockquote>The injector detects the Vault <em><strong>"Annotations"</strong></em> or <em><strong>"ConfigMap"</strong></em> and injects an <em><strong>initContainer</strong></em> into the init process of your application Pod to request the secret from the Vault server. After initialization, an agent is injected inside the pod to keep the requested secret available to your application container.</blockquote><blockquote>For the agent injector, we will use our <a href="https://hub.docker.com/repository/docker/medinvention/kubernetes-vault" rel="nofollow">docker image</a>; it's similar to the official image but adds arm64 support (as of 02/2021, only the amd64 arch is distributed by HashiCorp), see the <a href="https://github.com/mmohamed/vault-kubernetes/blob/main/docker">docker files</a>.</blockquote><h2 id="7-sample-deployment-">7- Sample deployment :</h2><p>We could use the UI to configure and use Vault; in this project we use the CLI.</p><p>1.	Start by installing Vault locally (on your workstation) for CLI use only</p><!--kg-card-begin: markdown--><pre><code>curl https://releases.hashicorp.com/vault/1.6.2/vault_1.6.2_linux_amd64.zip -o vault_1.6.2_linux_amd64.zip
unzip vault_1.6.2_linux_amd64.zip
chmod +x vault
# With ingress, you can use the root URL of the Vault UI, or use port forwarding
export VAULT_ADDR=&quot;YOUR_VAULT_HOST&quot;</code></pre>
<!--kg-card-end: markdown--><p>2.	Check the server status and login (using token like UI)</p><!--kg-card-begin: markdown--><pre><code>./vault status
&gt;&gt; Key             Value
&gt;&gt; ---             -----
&gt;&gt; Seal Type       shamir
&gt;&gt; Initialized     true
&gt;&gt; Sealed          false
&gt;&gt; Total Shares    1
&gt;&gt; Threshold       1
&gt;&gt; Version         1.6.2
&gt;&gt; Storage Type    consul
&gt;&gt; Cluster Name    vault-cluster-...
&gt;&gt; Cluster ID      ...
&gt;&gt; HA Enabled      true
&gt;&gt; HA Cluster      ..
&gt;&gt; HA Mode         active

./vault login
&lt;&lt; your_token</code></pre>
<!--kg-card-end: markdown--><p>3.	Create a key/value secret for testing</p><!--kg-card-begin: markdown--><pre><code>./vault secrets enable kv
./vault kv put kv/myapp/config username=&quot;admin&quot; password=&quot;adminpassword&quot;</code></pre>
<!--kg-card-end: markdown--><p>4.	Connect Kube to Vault</p><!--kg-card-begin: markdown--><pre><code># Create the service account to access secret
kubectl apply -f myapp/service-account.yaml
# Enable kubernetes support
./vault auth enable kubernetes
# Prepare kube api server data
export SECRET_NAME=&quot;$(kubectl get serviceaccount vault-auth  -o go-template='{{ (index .secrets 0).name }}')&quot;
export TR_ACCOUNT_TOKEN=&quot;$(kubectl get secret ${SECRET_NAME} -o go-template='{{ .data.token }}' | base64 --decode)&quot;
export K8S_API_SERVER=&quot;$(kubectl config view --raw -o go-template=&quot;{{ range .clusters }}{{ index .cluster \&quot;server\&quot; }}{{ end }}&quot;)&quot;
export K8S_CACERT=&quot;$(kubectl config view --raw -o go-template=&quot;{{ range .clusters }}{{ index .cluster \&quot;certificate-authority-data\&quot; }}{{ end }}&quot; | base64 --decode)&quot;
# Send kube config to vault
./vault write auth/kubernetes/config kubernetes_host=&quot;${K8S_API_SERVER}&quot; kubernetes_ca_cert=&quot;${K8S_CACERT}&quot; token_reviewer_jwt=&quot;${TR_ACCOUNT_TOKEN}&quot;</code></pre>
<!--kg-card-end: markdown--><p>5.	Create Vault policy and role for "myapp"</p><p>Edit policy file myapp/policy.json</p><!--kg-card-begin: markdown--><pre><code>path &quot;kv/myapp/*&quot; {
  capabilities = [&quot;read&quot;, &quot;list&quot;]
}</code></pre>
<!--kg-card-end: markdown--><p>Create the application role</p><!--kg-card-begin: markdown--><pre><code>./vault policy write myapp-ro myapp/policy.json
./vault write auth/kubernetes/role/myapp-role bound_service_account_names=vault-auth bound_service_account_namespaces=vault policies=default,myapp-ro ttl=15m</code></pre>
<!--kg-card-end: markdown--><p>6.	Deploy "myapp" for testing</p><p>Edit annotations for secret output, <a href="https://github.com/mmohamed/vault-kubernetes/blob/main/myapp/deployment.yaml">@see myapp/deployment.yaml</a></p><!--kg-card-begin: markdown--><pre><code>annotations:
    vault.hashicorp.com/agent-inject: &quot;true&quot;
    vault.hashicorp.com/agent-inject-secret-account: &quot;kv/myapp/config&quot;
    vault.hashicorp.com/agent-inject-template-account: |
        {{- with secret &quot;kv/myapp/config&quot; -}}
        dsn://{{ .Data.username }}:{{ .Data.password }}@database:port/mydb?sslmode=disable
        {{- end }}
    vault.hashicorp.com/role: &quot;myapp-role&quot;</code></pre>
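<p>To make the template concrete: with the kv values written earlier (username "admin", password "adminpassword"), the agent renders a file at <code>/vault/secrets/account</code> whose content is equivalent to the output of this small simulation (illustration only, not the agent itself):</p>

```shell
# Simulate the template rendering with the sample kv values
username=admin
password=adminpassword
printf 'dsn://%s:%s@database:port/mydb?sslmode=disable\n' "$username" "$password"
# -> dsn://admin:adminpassword@database:port/mydb?sslmode=disable
```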
<!--kg-card-end: markdown--><p>Deploy the app and check the injected secret in its logs</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f myapp/deployment.yaml 
export POD=$(kubectl get pods --selector=app=myapp --output=jsonpath={.items..metadata.name})
kubectl logs ${POD} myapp</code></pre>
<!--kg-card-end: markdown--><h2 id="8-tips">8- Tips</h2><p>To enable HTTP Basic authentication for the Consul UI (it runs without any by default), you can use Nginx ingress annotations, after generating an authentication secret.</p><!--kg-card-begin: markdown--><pre><code>htpasswd -c auth foo
kubectl create secret generic consul-auth --from-file=auth</code></pre>
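<p>If <code>htpasswd</code> (from apache2-utils) is not available, an equivalent entry can be produced with openssl's apr1 hasher, which Nginx understands (a sketch; "foo" and "yourpassword" are placeholders):</p>

```shell
# htpasswd-compatible entry without apache2-utils
printf 'foo:%s\n' "$(openssl passwd -apr1 yourpassword)" > auth
cat auth
```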
<!--kg-card-end: markdown--><p>Add annotations</p><!--kg-card-begin: markdown--><pre><code>nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: consul-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - Consul - MedInvention'
</code></pre>
<!--kg-card-end: markdown--><h2 id="links">Links</h2><ul><li><a href="https://github.com/kelseyhightower/consul-on-kubernetes">https://github.com/kelseyhightower/consul-on-kubernetes</a></li><li><a href="https://github.com/hashicorp/vault-k8s">https://github.com/hashicorp/vault-k8s</a></li><li><a href="https://www.hashicorp.com/blog/whats-next-for-vault-and-kubernetes" rel="nofollow">https://www.hashicorp.com/blog/whats-next-for-vault-and-kubernetes</a></li><li><a href="https://github.com/hashicorp/vault-helm">https://github.com/hashicorp/vault-helm</a></li><li><a href="https://medium.com/hashicorp-engineering/hashicorp-vault-delivering-secrets-with-kubernetes-1b358c03b2a3" rel="nofollow">https://medium.com/hashicorp-engineering/hashicorp-vault-delivering-secrets-with-kubernetes-1b358c03b2a3</a></li><li><a href="https://github.com/hashicorp/consul">https://github.com/hashicorp/consul</a></li><li><a href="https://blog.zwindler.fr/2020/08/31/gerez-vos-secrets-kubernetes-dans-vault/" rel="nofollow">https://blog.zwindler.fr/2020/08/31/gerez-vos-secrets-kubernetes-dans-vault/</a></li></ul><hr><p><em><a href="https://github.com/mmohamed/vault-kubernetes">Source @GitHub</a></em></p>]]></content:encoded></item><item><title><![CDATA[Simple way to secure Kafka / Zookeeper cluster on Kubernetes]]></title><description><![CDATA[After successful Kafka and Zookeeper deployment on Kubernetes following this post, it's time to make it more secure.]]></description><link>https://blog.medinvention.dev/simple-way-to-secure-kafka-zookeeper-cluster-on-kubernetes/</link><guid isPermaLink="false">6029b0492076420001b321fc</guid><category><![CDATA[kafka]]></category><category><![CDATA[zookeeper]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Mon, 15 Feb 2021 14:01:51 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2021/02/Kafka---Zookeeper-on-K8S--2-.png" 
medium="image"/><content:encoded><![CDATA[<img src="https://blog.medinvention.dev/content/images/2021/02/Kafka---Zookeeper-on-K8S--2-.png" alt="Simple way to secure Kafka / Zookeeper cluster on Kubernetes"><p>After successful Kafka and Zookeeper deployment on Kubernetes following <a href="https://blog.medinvention.dev/kafka-zookeeper-kubernetes-deployment">this post</a>, it's time to make it more secure.</p><h2 id="1-zookeeper-digest-authentication">1- ZooKeeper DIGEST authentication</h2><h3 id="account-configuration">Account configuration </h3><p><em>(one for the Zookeeper nodes, another for the Kafka broker) in the <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/zookeeper/config.secured.yaml">config.secured.yaml</a> file.</em></p><p>The Client section will be used by the zkCli.sh helper script to test the configuration. In an integration environment, the Client section must be removed.</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Secret
metadata:
  name: zookeeper-jaas
type: Opaque
stringData:
  zookeeper-jaas.conf: |-
    QuorumServer {
          org.apache.zookeeper.server.auth.DigestLoginModule required
          user_zk=&quot;passcode&quot;;
    };
    QuorumLearner {
          org.apache.zookeeper.server.auth.DigestLoginModule required
          username=&quot;zk&quot;
          password=&quot;passcode&quot;;
    }; 
    Server {
          org.apache.zookeeper.server.auth.DigestLoginModule required
          user_kafka=&quot;passcode&quot;
          user_client=&quot;passcode&quot;;
    };
    Client {
          org.apache.zookeeper.server.auth.DigestLoginModule required
          username=&quot;client&quot;
          password=&quot;passcode&quot;;
    };</code></pre>
<!--kg-card-end: markdown--><h3 id="create-secret-configmap-and-deploy-statefuset-">Create Secret / ConfigMap and deploy StatefulSet:</h3><!--kg-card-begin: markdown--><pre><code>kubectl apply -f zookeeper/config.secured.yaml 
kubectl apply -f zookeeper/statefulset.secured.yaml 
# Secret for kafka
kubectl apply -f kafka/config.secured.yaml</code></pre>
<!--kg-card-end: markdown--><h2 id="2-kafka-ssl-encryption">2- Kafka SSL encryption</h2><p>If you want to disable hostname verification for some reason, you need to omit the extension parameter <em>-ext</em> and add an empty environment variable to the Kafka broker, <code>KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=""</code>, or use the Kafka helper script for every broker.</p><!--kg-card-begin: markdown--><pre><code>kafka-configs.sh --bootstrap-server [kafka-0].kafka-broker.kafka.svc.cluster.local:9092 --entity-type brokers --entity-name 0 --alter --add-config &quot;listener.name.internal.ssl.endpoint.identification.algorithm=&quot;</code></pre>
<!--kg-card-end: markdown--><h3 id="create-ca-certificate-authority-">Create CA (certificate authority)</h3><!--kg-card-begin: markdown--><pre><code>openssl req -new -x509 -keyout ca-key -out ca-cert -days 365</code></pre>
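<p>As written, the command prompts interactively for a passphrase and a subject. For scripted runs it can be made non-interactive; the subject and the "passcode" passphrase below are assumptions chosen to match the signing steps further on:</p>

```shell
# Non-interactive CA generation (subject and passphrase are placeholders)
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
  -passout pass:passcode -subj "/CN=kafka-ca"
# Inspect the generated certificate
openssl x509 -in ca-cert -noout -subject -dates
```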
<!--kg-card-end: markdown--><h3 id="generate-server-keystore-and-client-keystore">Generate server keystore and client keystore</h3><!--kg-card-begin: markdown--><pre><code>keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA 
keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA</code></pre>
<!--kg-card-end: markdown--><h3 id="add-generated-ca-to-the-trust-store">Add generated CA to the trust store</h3><!--kg-card-begin: markdown--><pre><code>keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert 
keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
<!--kg-card-end: markdown--><h3 id="sign-the-key-store-with-passcode-and-ssl-cnf-">Sign the key store (with passcode and ssl.cnf)</h3><p>You need to update the <strong>alt_names </strong>section of <strong>ssl.cnf</strong> with the list of your broker hostnames.</p><!--kg-card-begin: markdown--><pre><code>keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file 
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:passcode -extfile ssl.cnf -extensions req_ext 
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert 
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed</code></pre>
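<p>For reference, the <code>req_ext</code> section of the <code>ssl.cnf</code> used above might look like the following; the DNS entries are examples and must be replaced with your actual broker hostnames:</p>

```
[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kafka-0.kafka-broker.kafka.svc.cluster.local
DNS.2 = kafka-1.kafka-broker.kafka.svc.cluster.local
DNS.3 = kafka-broker.kafka.svc.cluster.local
```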
<!--kg-card-end: markdown--><h3 id="sign-the-client-keystore">Sign the client keystore</h3><!--kg-card-begin: markdown--><pre><code>keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file cert-file-client 
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-client -out cert-signed-client -days 365 -CAcreateserial -passin pass:passcode -extfile ssl.cnf -extensions req_ext 
keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert 
keytool -keystore kafka.client.keystore.jks -alias localhost -import -file cert-signed-client</code></pre>
<!--kg-card-end: markdown--><h3 id="kafka-ssl-kubernetes">Kafka SSL Kubernetes </h3><p>Create kubernetes secret from kafka.keystore.jks and kafka.truststore.jks</p><!--kg-card-begin: markdown--><pre><code>kubectl create secret generic ssl --from-literal=keystore_password=passcode --from-file=kafka.keystore.jks=ssl/kafka.server.keystore.jks --from-literal=truststore_password=passcode --from-file=kafka.truststore.jks=ssl/kafka.server.truststore.jks</code></pre>
<!--kg-card-end: markdown--><p>Update the Kafka StatefulSet to mount the ssl secret, then apply the broker and service configuration:</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f kafka/statefulset.ssl.yaml
kubectl apply -f kafka/service.ssl.yaml</code></pre>
<!--kg-card-end: markdown--><h3 id="testing">Testing</h3><p>Use openssl to debug the connection and validate the certificate data:</p><!--kg-card-begin: markdown--><pre><code>kubectl exec -ti zk-0 -- openssl s_client -debug -connect kafka-0.kafka-broker.kafka.svc.cluster.local:9093 -tls1</code></pre>
<!--kg-card-end: markdown--><p>Create client configuration file (client.properties):</p><!--kg-card-begin: markdown--><pre><code>security.protocol=SSL
ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=passcode
ssl.keystore.location=/opt/kafka/config/client.keystore.jks
ssl.keystore.password=passcode
ssl.key.password=passcode
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1</code></pre>
<!--kg-card-end: markdown--><p>Send to test pod:</p><!--kg-card-begin: markdown--><pre><code>mkdir -p config 
cp client.properties config/client.properties 
cp ssl/kafka.client.keystore.jks config/client.keystore.jks 
cp ssl/kafka.client.truststore.jks config/kafka.truststore.jks 
kubectl cp config kafka-1:/opt/kafka</code></pre>
<!--kg-card-end: markdown--><p>Run test:</p><!--kg-card-begin: markdown--><pre><code>kubectl exec -ti kafka-1 -- kafka-console-producer.sh --bootstrap-server kafka-0.kafka-broker.kafka.svc.cluster.local:9093 --topic k8s --producer.config /opt/kafka/config/client.properties 
&gt;&gt; Hello with secured connection

kubectl exec -ti kafka-1 -- kafka-console-consumer.sh --bootstrap-server kafka-0.kafka-broker.kafka.svc.cluster.local:9093 --topic k8s --consumer.config /opt/kafka/config/client.properties --from-beginning 
&lt;&lt; Hello with secured connection</code></pre>
<!--kg-card-end: markdown--><p>Prepare Secret for client:</p><!--kg-card-begin: markdown--><pre><code>kubectl create secret generic client-ssl --from-file=ca-certs.pem=ssl/ca-cert --from-file=cert.pem=ssl/cert-signed-client 
kubectl apply -f consumer.secured.yaml
kubectl logs consumer-secured</code></pre>
<!--kg-card-end: markdown--><p></p><h3 id="sources-links">Sources &amp; Links</h3><ul><li><a href="https://access.redhat.com/documentation/en-us/red_hat_amq/7.2/html/using_amq_streams_on_red_hat_enterprise_linux_rhel/configuring_kafka" rel="nofollow">Redhat-Kafka</a></li><li><a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication" rel="nofollow">Confluence-Zookeeper</a></li><li><a href="https://kafka.apache.org/documentation/#security_overview" rel="nofollow">Apache-Kafka</a></li><li><a href="https://github.com/bitnami/charts/issues/1279">Bitnami-Kafka</a></li><li><a href="https://stackoverflow.com/questions/54903381/kafka-failed-authentication-due-to-ssl-handshake-failed" rel="nofollow">Kafka-SSL</a></li><li><a href="https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/KafkaIntegrationGuide/TLS-SSL/KafkaTLS-SSLExamplePart3ConfigureKafka.htm?tocpath=Integrating%20with%20Apache%20Kafka%7CUsing%20TLS%2FSSL%20Encryption%20with%20Kafka%7C_____7" rel="nofollow">Vertica-Kafka-SSL</a></li><li><a href="https://gist.github.com/anoopl/85d869f7a85a70c6c60542922fc314a8">Kafka-SSL</a></li></ul><p></p><hr><p><a href="https://github.com/mmohamed/kafka-kubernetes">@GitHub source</a></p>]]></content:encoded></item><item><title><![CDATA[Kafka & Zookeeper Kubernetes Deployment]]></title><description><![CDATA[Following this post, you will be able to deploy, configure and use an Apache Kafka event streaming platform with Apache Zookeeper , for your integration and development environment easily.]]></description><link>https://blog.medinvention.dev/kafka-zookeeper-kubernetes-deployment/</link><guid isPermaLink="false">6029aacf2076420001b32185</guid><category><![CDATA[kafka]]></category><category><![CDATA[zookeeper]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Mon, 15 Feb 2021 14:01:42 GMT</pubDate><media:content 
url="https://blog.medinvention.dev/content/images/2021/02/image-2-3.png" medium="image"/><content:encoded><![CDATA[<h1></h1><img src="https://blog.medinvention.dev/content/images/2021/02/image-2-3.png" alt="Kafka & Zookeeper Kubernetes Deployment"><p>Following this post, you will be able to deploy, configure and use an <a href="https://kafka.apache.org/" rel="nofollow">Apache Kafka</a> event streaming platform with <a href="https://zookeeper.apache.org/" rel="nofollow">Apache Zookeeper</a> , for your integration and development environment easily.</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2021/02/Apache-Kafka.png" class="kg-image" alt="Kafka & Zookeeper Kubernetes Deployment"></figure><blockquote>Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.</blockquote><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2021/02/Apache-ZooKeeper.png" class="kg-image" alt="Kafka & Zookeeper Kubernetes Deployment"></figure><blockquote>ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.</blockquote><h2 id="1-stack">1- Stack</h2><ul><li>Kubelet : <strong>v1.17.2 / v1.18.5</strong></li><li>Kubectl : <strong>v1.17.1</strong></li><li>Docker : <strong>19.03.5 / 19.03.8</strong></li><li>Zookeeper : <strong>3.4.10</strong></li><li>Kafka : <strong>2.7.0 (Scala 2.13 / Glib 2.31-r0)</strong></li><li>Kube namespace : <strong>kafka</strong> <em>(if you use a different namespace, it must be changed in service and pod hostnames)</em></li><li>Architecture : <strong>AMD64 / ARM64</strong></li><li>Python (optional, for client testing) : <strong>3.8</strong></li></ul><h2 id="2-zookeeper-deployment">2- Zookeeper deployment</h2><p>First, deploy a 
small Zookeeper cluster (2 pods) using a <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/zookeeper/statefulset.yaml">StatefulSet</a> and exposing it with 2 <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/zookeeper/service.yaml">Services</a>, one for client communication and another for Zookeeper cluster communication (leader election).</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f zookeeper/statefulset.yaml
kubectl apply -f zookeeper/service.yaml</code></pre>
<!--kg-card-end: markdown--><p>Next, you can test your deployment :</p><!--kg-card-begin: markdown--><pre><code>kubectl exec zk-0 zkCli.sh create /hello world
kubectl exec zk-1 zkCli.sh get /hello</code></pre>
<!--kg-card-end: markdown--><p>For more information, take a tour of the <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/" rel="nofollow">kubernetes blog</a>.</p><h2 id="3-consumer-producer-application-case-">3- Consumer/Producer application case :</h2><p>You need to deploy a Kafka broker with ZooKeeper as a synchronization service :</p><ol><li>Create 2 Kafka brokers with a <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/kafka/statefulset.yaml">StatefulSet</a></li><li>Create the first topic (k8s for example); you can use one of the available broker hostnames or the broker service hostname :</li></ol><p>		- kafka-0.kafka-broker.kafka.svc.cluster.local</p><p>		- kafka-1.kafka-broker.kafka.svc.cluster.local</p><p>		- kafka-broker.kafka.svc.cluster.local</p><p>Next, create the first topic and run the first consumer client to check the configuration.</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f service.yaml
kubectl apply -f statefulset.yaml
kubectl exec -ti kafka-0 -- kafka-topics.sh --create --topic=k8s --bootstrap-server kafka-0.kafka-broker.kafka.svc.cluster.local:9092
kubectl apply -f consumer.yaml
kubectl logs consumer</code></pre>
<!--kg-card-end: markdown--><h2 id="4-development-case-from-workstation-with-kubectl-">4- Development case (from Workstation with kubectl)</h2><p>You need to create a custom broker (for host binding), activate port forwarding to your workstation, and finally create a development topic :</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f dev-brocker.yaml
kubectl port-forward pod/dev-brocker 9092:9092 
kubectl exec -ti dev-brocker -- kafka-topics.sh --create --topic dev-k8s --bootstrap-server 127.0.0.1:9092</code></pre>
<!--kg-card-end: markdown--><ul><li>Running the python consumer and producer :</li></ul><!--kg-card-begin: markdown--><pre><code>pip install kafka-python
python ../client/Consumer.py
python ../client/Producer.py</code></pre>
<!--kg-card-end: markdown--><ul><li>Using Kafka help script client</li></ul><!--kg-card-begin: markdown--><pre><code>kubectl exec -ti dev-brocker -- kafka-console-producer.sh --topic=dev-k8s --bootstrap-server 127.0.0.1:9092 
&gt;&gt; Hello World!
&gt;&gt; I'm a Producer
kubectl exec -ti dev-brocker -- kafka-console-consumer.sh --topic=k8s --from-beginning --bootstrap-server 127.0.0.1:9092
&lt;&lt; Hello World!
&lt;&lt; I'm a Producer</code></pre>
<!--kg-card-end: markdown--><h2 id="5-secure-your-kafka">5- Secure your Kafka</h2><p>With a standard Kafka setup, any user or application can write any message to any topic. It's the same for Zookeeper. So, we need to add a DIGEST authentication layer to Zookeeper (it doesn't support ACLs, but since the Kafka broker is the only client, DIGEST is sufficient) to authorize only the Kafka broker. On the Kafka side, we need to add SSL authentication so that only valid clients can use the services.</p><p>Follow the <a href="https://blog.medinvention.dev/how-secure-kafka-cluster-with-kubernetes">security section documentation</a></p><h2 id="6-sourcing">6- Sourcing</h2><ul><li>Zookeeper Docker image : we use the <a href="https://github.com/kow3ns/kubernetes-zookeeper">kubernetes-zookeeper @kow3ns</a> as base image with 2 modifications:</li></ul><p>			- Add JVM flags to be injected into the Java environment file <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/zookeeper/docker/scripts/start-zookeeper">@see start-zookeeper.sh</a></p><!--kg-card-begin: markdown--><pre><code>echo &quot;JVMFLAGS=\&quot;-Xmx$HEAP -Xms$HEAP $JVMFLAGS\&quot;&quot; &gt;&gt; $JAVA_ENV_FILE</code></pre>
<!--kg-card-end: markdown--><p>			- Append an extra configuration file (path given by the EXTRA_CONFIG_FILE environment variable) to the end of the configuration file <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/zookeeper/docker/scripts/start-zookeeper">@see start-zookeeper.sh</a></p><!--kg-card-begin: markdown--><pre><code>#Add extra configuration from file (file path in env : EXTRA_CONFIG_FILE) 
if [[ -n &quot;$EXTRA_CONFIG_FILE&quot; ]]; then    
    echo &quot;#Start extra-section&quot; &gt;&gt; $CONFIG_FILE    
    cat $EXTRA_CONFIG_FILE &gt;&gt; $CONFIG_FILE    
    echo &quot;#End of extra-section&quot; &gt;&gt; $CONFIG_FILE
fi</code></pre>
<!--kg-card-end: markdown--><ul><li>Kafka Docker image: we use <a href="https://github.com/wurstmeister/kafka-docker">kafka-docker @wurstmeister</a> as the base with 2 modifications:</li></ul><p>			- For the ARM64 arch, switch the base image from 'openjdk:8u212-jre-alpine' to 'openjdk:8u201-jre-alpine' to prevent a container core dump <a href="https://github.com/openhab/openhab-docker/issues/233">@see issue</a>.</p><p>			- For the K8S deployment, add a 'KAFKA_LISTENERS_COMMAND' environment parameter to build 'KAFKA_LISTENERS' on the fly (to use the pod hostname once the container has started) <a href="https://github.com/mmohamed/kafka-kubernetes/blob/master/kafka/docker/start-kafka.sh">@see start-kafka.sh</a></p><pre><code>if [[ -n "$KAFKA_LISTENERS_COMMAND" ]]; then
    KAFKA_LISTENERS=$(eval "$KAFKA_LISTENERS_COMMAND")
    export KAFKA_LISTENERS
    unset KAFKA_LISTENERS_COMMAND
fi
</code></pre><h2 id="7-tips">7- Tips</h2><ul><li>For debugging, you can bypass the Kafka broker for topic management (Kafka and ZooKeeper helper scripts):</li></ul><!--kg-card-begin: markdown--><pre><code>kubectl exec -ti kafka-0 -- kafka-topics.sh --create --topic k8s --zookeeper zk-cs.kafka.svc.cluster.local:2181 
kubectl exec -ti kafka-0 -- kafka-topics.sh --describe --topic k8s --zookeeper zk-cs.kafka.svc.cluster.local:2181 
kubectl exec zk-1 -- zkCli.sh ls /brokers/topics</code></pre>
<!--kg-card-end: markdown--><ul><li>Building multi-architecture Docker images:</li></ul><!--kg-card-begin: markdown--><pre><code>docker buildx build --push --platform linux/arm64/v8,linux/amd64 --tag [medinvention]/kubernetes-zookeeper:latest .
docker buildx build --push --platform linux/arm64/v8,linux/amd64 --tag [medinvention]/kafka:latest .</code></pre>
<!--kg-card-end: markdown--><hr><p><a href="https://github.com/mmohamed/kafka-kubernetes">@GitHub source</a></p>]]></content:encoded></item><item><title><![CDATA[Very Simple Mesh Service]]></title><description><![CDATA[<p></p><h3 id="let-s-start-by-a-definition-what-s-a-mesh-service">Let's start by a definition, what's a "Mesh Service" ?</h3><p></p><blockquote><a href="https://www.redhat.com/en/topics/microservices/what-is-istio">.</a>..is a way to control how different parts of an application share data with one another. Unlike other systems for managing this communication, a service mesh is a dedicated infrastructure layer built right into an app. This visible infrastructure layer can</blockquote>]]></description><link>https://blog.medinvention.dev/simple-mesh-service/</link><guid isPermaLink="false">5eeb6a7a91b1870001334390</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[mesh-service]]></category><category><![CDATA[armv7]]></category><category><![CDATA[amd64]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Tue, 23 Jun 2020 14:51:32 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2020/06/Cover.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://blog.medinvention.dev/content/images/2020/06/Cover.PNG" alt="Very Simple Mesh Service"><p></p><h3 id="let-s-start-by-a-definition-what-s-a-mesh-service">Let's start by a definition, what's a "Mesh Service" ?</h3><p></p><blockquote><a href="https://www.redhat.com/en/topics/microservices/what-is-istio">.</a>..is a way to control how different parts of an application share data with one another. Unlike other systems for managing this communication, a service mesh is a dedicated infrastructure layer built right into an app. 
This visible infrastructure layer can document how well (or not) different parts of an app interact, so it becomes easier to optimize communication and avoid downtime as an app grows...</blockquote><hr><h3 id="context">Context</h3><p>Imagine you have an application built on many service calls, and you want to know <strong>how services communicate</strong> inside your cluster, <strong>monitor your request/response times</strong> and verify your <strong>request tracing and routing</strong>: you need a service mesh tool.</p><p>There are several solutions offering many features, but these popular tools all need complex configuration, require redefining your deployments, and are not available for the ARM architecture.</p><p>So I propose <strong>"SMS"</strong>, a very simple mesh service for Kubernetes, dedicated to REST services (but it works with any HTTP service) and built to run on <strong>ARM </strong>(like Raspberry Pi ARMv7) and <strong>amd64</strong>.</p><hr><h2 id="simple-mesh-service-install">Simple Mesh Service - Install</h2><p>The simplest way is with the Helm package:</p><!--kg-card-begin: markdown--><pre><code>helm repo add medinvention-dev https://mmohamed.github.io/kubernetes-charts</code></pre>
<!--kg-card-end: markdown--><p>Create your own values file "myvalues.yaml" (optional):</p><blockquote><strong><em>Caution: the Helm package is configured by default for the ARM architecture; if you want to use it on an AMD64 architecture (AWS, GCP, ...), you must append "-amd64" to every release option</em></strong></blockquote><!--kg-card-begin: markdown--><pre><code>db:
  reuse: false # delete db on install or update
  release: 5.6 # 5.6-amd64

api:
  release: v0.1.0 # v0.1.0-amd64
  ingress:
    host: your ingress host for api...

ui:
  release: v0.1.0 # v0.1.0-amd64
  ingress:
    host: your ingress host for ui...
    
sidecar:
  release: v0.1.0 # v0.1.0-amd64

logger:
  release: v0.1.0 # v0.1.0-amd64

master:
  release: v0.1.0 # v0.1.0-amd64

controller:
  release: v0.1.0 # v0.1.0-amd64

processor:  
  release: v0.1.0 # v0.1.0-amd64
</code></pre>
<!--kg-card-end: markdown--><p>Install it (you can create the namespace beforehand, for example kube-sms):</p><!--kg-card-begin: markdown--><pre><code>helm install -f myvalues.yaml kube-sms -n kube-sms medinvention-dev/sms</code></pre>
<!--kg-card-end: markdown--><p>Now you can access the dashboard in the web view (if you have configured an ingress for the UI/API) with the credentials defined in your values file (default: admin@sms.dev / admin).</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/06/EmptyDashboard.png" class="kg-image" alt="Very Simple Mesh Service"></figure><p>To test your install, you can use the sample files (from the <a href="https://github.com/mmohamed/k8s-sms/tree/v0.1.0/Sample">source</a>):</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f /Sample</code></pre>
<!--kg-card-end: markdown--><p>You will see your 4 service groups without any connection.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/06/StartDashboard.png" class="kg-image" alt="Very Simple Mesh Service"></figure><p>To see how it works, execute some requests against the services; you can use this simple script (inside the cluster, from any container):</p><!--kg-card-begin: markdown--><pre><code>for i in {1..50}
do
    curl http://product-service.sample-sms.svc.cluster.local:5000/v3
    curl http://product-service.sample-sms.svc.cluster.local:5000/v2
    curl http://product-service.sample-sms.svc.cluster.local:5000/v1
    curl http://product-service.sample-sms.svc.cluster.local:5000
    curl http://product-service.sample-sms.svc.cluster.local:5000/notfound
done</code></pre>
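Equivalently, here is a small Python load generator (an illustrative sketch using only the standard library; the service URL is the one from the sample above):

```python
from urllib import request, error

BASE = "http://product-service.sample-sms.svc.cluster.local:5000"
PATHS = ["/v3", "/v2", "/v1", "", "/notfound"]

def urls(rounds=50):
    """Build the same request sequence as the shell loop above."""
    return [BASE + path for _ in range(rounds) for path in PATHS]

def run():
    # Run from inside the cluster; /notfound deliberately produces errors
    # so that the dashboard shows a non-zero error rate.
    for url in urls():
        try:
            request.urlopen(url, timeout=5).read()
        except error.URLError:
            pass  # errors are part of the test traffic
```

Call `run()` from a pod that can resolve the cluster-internal service name.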
<!--kg-card-end: markdown--><p>After at most 5 minutes (the processor runs every 5 minutes), you can see how your services communicate, like this:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/06/sms-3.gif" class="kg-image" alt="Very Simple Mesh Service"></figure><p>If you have any problem during deployment, you can check the logs:</p><!--kg-card-begin: markdown--><pre><code>kubectl logs -n kube-sms -l &quot;run=controller,app.kubernetes.io/name=SMS,app.kubernetes.io/instance=kube-sms&quot;
kubectl logs -n kube-sms -l &quot;run=master,app.kubernetes.io/name=SMS,app.kubernetes.io/instance=kube-sms&quot;
kubectl logs -n kube-sms -l &quot;run=logger,app.kubernetes.io/name=SMS,app.kubernetes.io/instance=kube-sms&quot;
kubectl logs -n kube-sms -l &quot;run=processor,app.kubernetes.io/name=SMS,app.kubernetes.io/instance=kube-sms&quot;
kubectl logs -n kube-sms -l &quot;run=db,app.kubernetes.io/name=SMS,app.kubernetes.io/instance=kube-sms&quot;
kubectl logs -n kube-sms -l &quot;run=api,app.kubernetes.io/name=SMS,app.kubernetes.io/instance=kube-sms&quot;
</code></pre>
<!--kg-card-end: markdown--><hr><h2 id="simple-mesh-service-how-to">Simple Mesh Service - How to</h2><p>Let's look at V1 of the Review deployment (part of the sample):</p><!--kg-card-begin: markdown--><pre><code>...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-review-v1
  namespace: sample-sms
  annotations:
    medinvention.dev/sms.group: review
    medinvention.dev/sms.port: &quot;5000&quot;
    medinvention.dev/sms.service: review-v1-service
spec:
  selector:
    matchLabels:
      app: review-v1
  replicas: 1
  template:
    metadata:
      labels:
        app: review-v1
    spec:
      containers:
      - name: server
        image: python:3
        command: ['sh', '-c', 'pip install flask &amp;&amp; python /var/static/server']
        ports:
          - containerPort: 5000
        volumeMounts:
          - name: review-v1-server
            mountPath: /var/static
      volumes:
        - name: review-v1-server
          configMap:
            name: review-v1-server
...
</code></pre>
<!--kg-card-end: markdown--><p>For our sample we need just 3 annotations:</p><!--kg-card-begin: markdown--><pre><code>medinvention.dev/sms.group: review</code></pre>
<!--kg-card-end: markdown--><p>This specifies the service group. For the Review services we have 3 versions (v1, v2 and v3), so we use "review" as the unique group name; it is used to associate the deployment with its service group, and it becomes the title of the "Review" block header in the dashboard interface.</p><!--kg-card-begin: markdown--><pre><code>medinvention.dev/sms.port: &quot;5000&quot;</code></pre>
<!--kg-card-end: markdown--><p>This is the container port that exposes your service. In this sample it's 5000, and it will accept HTTP requests only. This annotation is not mandatory; by default, port 80 is used.</p><!--kg-card-begin: markdown--><pre><code>medinvention.dev/sms.service: review-v1-service</code></pre>
<!--kg-card-end: markdown--><p>This is the name of the service that exposes your container port. This annotation is optional for SMS, but you must use a service if you want to keep the same port. In the sample, "review-v1-service" binds 80 to 5000, so after applying SMS only the target port of the service is updated and you can keep calling the service on 80.</p><p>If the service is defined in a different namespace than the deployment, you can use this annotation to specify it:</p><!--kg-card-begin: markdown--><pre><code>medinvention.dev/sms.servicenamespace: sample-sms</code></pre>
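To make the defaults concrete, this is how a controller might resolve these annotations from a Deployment manifest. This is an illustrative sketch, not the actual SMS controller code; the defaults (port 80, service namespace falling back to the deployment namespace) are the ones documented above:

```python
PREFIX = "medinvention.dev/sms."

def read_sms_config(deployment):
    """Extract the SMS annotations from a Deployment dict, applying the
    documented defaults (port 80, service namespace = deployment namespace)."""
    metadata = deployment.get("metadata", {})
    annotations = metadata.get("annotations", {})
    return {
        "group": annotations.get(PREFIX + "group"),
        "port": int(annotations.get(PREFIX + "port", "80")),
        "service": annotations.get(PREFIX + "service"),
        "service_namespace": annotations.get(PREFIX + "servicenamespace",
                                             metadata.get("namespace")),
    }

# Example with the sample deployment's metadata:
config = read_sms_config({
    "metadata": {
        "namespace": "sample-sms",
        "annotations": {
            "medinvention.dev/sms.group": "review",
            "medinvention.dev/sms.port": "5000",
            "medinvention.dev/sms.service": "review-v1-service",
        },
    },
})
```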
<!--kg-card-end: markdown--><p><strong>It's done! You can delete the sample and start using SMS with your own services :)</strong></p><hr><h2 id="simple-mesh-service-features">Simple Mesh Service - Features</h2><p></p><p>Using the SMS Dashboard:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/06/SampleDashboard.png" class="kg-image" alt="Very Simple Mesh Service"></figure><p>In the right pane, you can view some metadata about your service group, such as its name, its status and how many services it contains.</p><p>You also get statistical data, like the average request and response time for the selected group and the success/error request rate.</p><p>At the bottom, you can see request status statistics.</p><p>In the top bar, you can filter by date or by namespace. Finally, this view auto-refreshes every 30 seconds, but you can force a refresh using the reload button.</p><p></p><h2 id="enjoy">Enjoy</h2><p></p><p></p><p><em>Note: this is a development release of the SMS project, <a href="https://github.com/mmohamed/k8s-sms">available here</a>; it may have bugs and may not work correctly with very large services.</em></p>]]></content:encoded></item><item><title><![CDATA[K8S CPU Temperature & Fan monitoring for RPI]]></title><description><![CDATA[<p>Today, our goal is to deploy a simple application for cluster resources monitoring. 
Target resources are CPU load, memory consumption, CPU temperature and pod state, like the top view on a Linux system.</p><hr><h2 id="project-stack">Project stack</h2><p></p><h3 id="application">Application</h3><p>It's a Java Maven multi-module project, source code available <a href="https://github.com/mmohamed/k8s-monitoring">here</a>: </p><ul><li><strong>Core </strong>module for communication</li></ul>]]></description><link>https://blog.medinvention.dev/k8s-cpu-temperature-fan-monitoring-for-rpi/</link><guid isPermaLink="false">5e8b68ef91b1870001333e96</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[raspberry]]></category><category><![CDATA[gpio]]></category><category><![CDATA[flask]]></category><category><![CDATA[maven]]></category><category><![CDATA[springboot]]></category><category><![CDATA[reactjs]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Tue, 07 Apr 2020 10:59:18 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2020/04/FAN-CPU-Diagram-Full--6-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.medinvention.dev/content/images/2020/04/FAN-CPU-Diagram-Full--6-.jpg" alt="K8S CPU Temperature & Fan monitoring for RPI"><p>Today, our goal is to deploy a simple application for cluster resources monitoring. 
Target resources are CPU load, memory consumption, CPU temperature and pod state, like the top view on a Linux system.</p><hr><h2 id="project-stack">Project stack</h2><p></p><h3 id="application">Application</h3><p>It's a Java Maven multi-module project, with source code available <a href="https://github.com/mmohamed/k8s-monitoring">here</a>: </p><ul><li><strong>Core </strong>module for communication with the Cluster API server using the <a href="https://github.com/kubernetes-client/java">Kubernetes Java Client</a>.</li><li><strong>Service </strong>module to expose REST services using <a href="https://spring.io/projects/spring-boot">Spring Boot</a> and the Spring REST component.</li><li><strong>Webapp </strong>module, a web application based on the <a href="https://reactjs.org/">ReactJs Framework</a> with the <a href="https://material-ui.com/">Material-UI Framework</a>.</li></ul><h3 id="metrics">Metrics</h3><p>With a default install of Kubernetes, we don't have metrics such as node CPU and memory consumption, so we will deploy a Metrics Server with our <a href="https://hub.docker.com/repository/docker/medinvention/metrics-server-arm">image</a> built for the ARMv7 architecture. The Metrics Server deployment source code is available <a href="https://github.com/mmohamed/k8s-raspberry/blob/master/kube/metrics.yaml">here</a>.</p><h3 id="temperature-sensor">Temperature sensor</h3><p>To track the CPU temperature of every node, we will deploy a very small pod and use its "/sys/class/thermal", which is inherited from the host.</p><h3 id="fan-commander">Fan commander</h3><p>To manage the fan, we will use the GPIO utilities of the <a href="https://pypi.org/project/RPi.GPIO/">Python RPi library</a>. 
</p><hr><h2 id="material">Material</h2><h3 id="fan">Fan</h3><p>We have a tower holding all the worker nodes (north, south, east and west) with a single fan connected to the west node (because it's at the top of the tower).</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/04/IMG_20200325_204830.jpg" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><h3 id="chipset">Chipset</h3><p>To manage the fan, we will use a single <a href="http://www.ti.com/lit/ds/symlink/l293.pdf">L293D</a> chipset from Texas Instruments. This chipset allows us to drive 2 DC motors (rotation direction and speed).</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/04/L293D.jpg" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p></p><p>That's it for the requirements, let's start.</p><hr><h2 id="our-solution">Our Solution</h2><p></p><h3 id="global-view">Global view</h3><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/04/FAN-CPU-Diagram-Full.jpg" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"><figcaption>Global view</figcaption></figure><p>Our solution is composed of 2 sections: a logical section and a material section.</p><h3 id="logical-section-metrics-server">Logical section - Metrics Server</h3><p>Start by deploying it:</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f https://raw.githubusercontent.com/mmohamed/k8s-raspberry/master/kube/metrics.yaml</code></pre>
<!--kg-card-end: markdown--><p>If successful, you should see a new pod deployed:</p><!--kg-card-begin: markdown--><pre><code>kube-system            metrics-server-5d74                    1/1     Running</code></pre>
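The Metrics Server reports node usage as Kubernetes quantity strings (CPU often in nanocores such as "250000000n", memory in binary units such as "512Mi"). A small helper to decode them, shown here as an illustrative sketch (it is not part of the monitoring project, which handles this in Java):

```python
def cpu_to_cores(quantity):
    """Convert a Kubernetes CPU quantity string to a number of cores."""
    if quantity.endswith("n"):   # nanocores
        return int(quantity[:-1]) / 1_000_000_000
    if quantity.endswith("u"):   # microcores
        return int(quantity[:-1]) / 1_000_000
    if quantity.endswith("m"):   # millicores
        return int(quantity[:-1]) / 1000
    return float(quantity)      # plain cores

def memory_to_bytes(quantity):
    """Convert a Kubernetes memory quantity string (binary suffixes) to bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)  # already plain bytes
```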
<!--kg-card-end: markdown--><p></p><h3 id="logical-section-application">Logical section - Application</h3><p>For building and deployment, we will use Jenkins with a simple pipeline; more information is available in this <a href="https://blog.medinvention.dev/devops-with-own-rpi-k8s/">post</a>.</p><p>To build the application, we start by building the "webapp" module using Node and the backend application using Maven, to get an HTML web application and a Jar file.</p><p>For the frontend application we will use a simple Nginx image, and an OpenJDK image for the backend.</p><p>The application can monitor 5 types of data:</p><p><strong>Minimal Cluster Health</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/04/WebAppHeader.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>It's a minimal cluster health indicator represented by the state of the master node, displayed as a notification badge: it can be "OK" or "KO".</p><p><strong>Pods</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/04/WebAppBottom.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>We use the Kubernetes Java Client to call the Cluster API Server, calling the "listPodForAllNamespaces" method for the pod listing view.</p><p><strong>Nodes</strong></p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/04/WebAppNodes.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>As with the Pods data, we use the "listNode" method to get static node information.</p><p><strong>CPU &amp; Memory usages</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/04/WebAppTop-1.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>This data is only available with the Metrics Server deployed, but the standard Kubernetes client doesn't have a method or data structure to call the metrics services.</p><p>We created an extension of the standard client that adds "V1Beta1NodeMetrics" for metrics data representation. Then we extended the "CoreV1Api" client with a new method, "clusterMetrics", which calls the Metrics Server service on the "metrics.k8s.io/v1beta1/nodes" path.</p><p>The extension source code is available <a href="https://github.com/mmohamed/k8s-monitoring-core/tree/dev/src/main/java/dev/medinvention/core/api">here</a>.</p><p><strong>CPU Temperature</strong></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.medinvention.dev/content/images/2020/04/WebAppCenter.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>Standard cluster metrics don't include the CPU temperature of nodes, so we need to deploy sensors to collect it. We use a DaemonSet to deploy a set of pods that collect and send temperature values to the backend application.</p><!--kg-card-begin: markdown--><pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
  namespace: monitoring
  labels:
    k8s-app: monitoring-agent
spec:
  selector:
    matchLabels:
      name: monitoring-agent
  template:
    metadata:
      labels:
        name: monitoring-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: monitoring-agent
        image: busybox
        env:
          - name: NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: SERVER
            value: http://monitoring-service.monitoring.svc.cluster.local/k8s/collect/{{token}}/temperature
        command: [ &quot;sh&quot;, &quot;-c&quot;]
        args:
        - while true; do
            TEMP=$(cat /sys/class/thermal/thermal_zone0/temp);
            URL=&quot;$SERVER?node=$NODE&amp;value=$TEMP&quot;;
            wget -qO- $URL;
            sleep 5;
          done;
        imagePullPolicy: IfNotPresent</code></pre>
<!--kg-card-end: markdown--><p>For this sensor, we use a very small container (busybox) that continuously collects the temperature (every 5 seconds) and sends it to the backend application using the internal DNS service "monitoring-service.monitoring.svc.cluster.local" and a static security token "{{token}}" defined in the backend app.</p><p>We have added a toleration to deploy a pod replica onto the "Master" node even though it's not schedulable.</p><!--kg-card-begin: markdown--><pre><code>monitoring-agent-6k7r4     1/1     Running   10.244.4.244   east
monitoring-agent-gzlc8     1/1     Running   10.244.2.181   north
monitoring-agent-hx9fx     1/1     Running   10.244.1.60    south
monitoring-agent-mtkf5     1/1     Running   10.244.3.28    west
monitoring-agent-znczc     1/1     Running   10.244.0.31    master</code></pre>
<!--kg-card-end: markdown--><p>This data is not persisted anywhere; only the last value sent for each node is stored in an in-memory "ConcurrentMap" variable defined in the backend app, and it is sent to the frontend app on REST service calls.</p><p><strong>Outside monitoring</strong></p><p>To monitor any other system (RPi or not) from outside of K8S, you can use a small bash script with our application to send data:</p><!--kg-card-begin: markdown--><pre><code>#!/bin/bash
# nohup sh agent.sh [NODE-NAME] [YOUR-SECURITY-TOKEN] &gt; /tmp/agent.log
if [ -z &quot;$1&quot; ]; then
    echo &quot;Node name required !&quot;
    exit 1
fi

if [ -z &quot;$2&quot; ]; then
    echo &quot;Security Token required !&quot;
    exit 1
fi

attempts=0
server=&quot;http[s]://[YOUR-API-BACKEND-URL]/k8s/collect/$2/temperature&quot;

while true; do

    temperature=$(cat /sys/class/thermal/thermal_zone0/temp)

    if [ $? != 0 ] || [ -z &quot;$temperature&quot; ]; then
        echo &quot;Unable to determine CPU temperature value !&quot;
        exit 1
    fi

    url=&quot;$server?node=$1&amp;value=$temperature&quot;

    responseCode=$(curl --silent --output /dev/null --write-out &quot;%{http_code}&quot; $url)

    if [ $? != 0 ] || [ -z &quot;$responseCode&quot; ] || [ $responseCode -ne 200 ]; then
        attempts=$((attempts + 1))
        echo &quot;[ATTEMPT-$attempts] Failed sending data to server : $responseCode&quot;
        if [ $attempts = 20 ]; then
            echo &quot;Server response error after 20 attempts !&quot;
            exit 1
        fi;
    else
        attempts=0	
    fi

    sleep 5
done;</code></pre>
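The same outside agent can be sketched in Python. This is a hypothetical equivalent of the bash script above, standard library only; the backend URL placeholder is kept as-is and must be filled in:

```python
import sys
import time
from urllib import request, error

def read_temperature(path="/sys/class/thermal/thermal_zone0/temp"):
    """Read the CPU temperature; the kernel reports millidegrees Celsius."""
    with open(path) as handle:
        return int(handle.read().strip())

def build_url(server, node, value):
    """Build the collect URL: the token sits in the path, node and value are query args."""
    return "%s?node=%s&value=%s" % (server, node, value)

def main(node, token, max_attempts=20):
    # Usage: python agent.py <node-name> <security-token>
    server = "http[s]://[YOUR-API-BACKEND-URL]/k8s/collect/%s/temperature" % token
    attempts = 0
    while True:
        url = build_url(server, node, read_temperature())
        try:
            request.urlopen(url, timeout=10)
            attempts = 0  # reset the failure counter on success
        except error.URLError:
            attempts += 1
            if attempts >= max_attempts:
                sys.exit("Server response error after %d attempts!" % max_attempts)
        time.sleep(5)
```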
<!--kg-card-end: markdown--><h3 id="logical-section-fan-monitoring">Logical section - Fan monitoring</h3><p>To monitor the fan, the backend app exposes 3 services (start, stop and get the fan status) by calling a micro REST server deployed directly on the west node (not with K8S).</p><p>We use the Python RPi library with Flask to communicate with the GPIO of the west node and to expose services (on, off and fan status) that drive the GPIO devices.</p><p>Start by installing the libraries:</p><!--kg-card-begin: markdown--><pre><code>apt-get install rpi.gpio
sudo pip install Flask # use sudo to install Flask bin into PATH</code></pre>
<!--kg-card-end: markdown--><p>Server source code (replace NODE-FAN-IP with the IP of the node connected to the fan):</p><!--kg-card-begin: markdown--><pre><code>from flask import Flask, jsonify
import RPi.GPIO as GPIO
import os, signal

GPIO.setmode(GPIO.BOARD)

IN1 = 11    # Input Pin 6
IN2 = 13    # Input Pin 7
ENABLE = 15 # Enable Pin 8

GPIO.setup(IN1,GPIO.OUT)
GPIO.setup(IN2,GPIO.OUT)
GPIO.setup(ENABLE,GPIO.OUT)

api = Flask(__name__)
api.config['SERVER_NAME'] = '[NODE-FAN-IP]:5000'

@api.route('/fan/status', methods=['GET'])
def status():
    status = GPIO.input(ENABLE) == GPIO.HIGH
    return jsonify({&quot;status&quot;: status, &quot;message&quot;: (&quot;FAN ON&quot; if status else &quot;FAN OFF&quot;)})

@api.route('/fan/start', methods=['GET'])
def start():
    GPIO.output(IN1,GPIO.HIGH)
    GPIO.output(IN2,GPIO.LOW)
    GPIO.output(ENABLE,GPIO.HIGH)
    return jsonify({&quot;status&quot;: True, &quot;message&quot;: &quot;FAN started&quot;})

@api.route('/fan/stop', methods=['GET'])
def stop():
    GPIO.output(IN1,GPIO.HIGH)
    GPIO.output(IN2,GPIO.LOW)
    GPIO.output(ENABLE,GPIO.LOW)
    return jsonify({&quot;status&quot;: True, &quot;message&quot;: &quot;FAN stopped&quot;})

@api.route('/server/shutdown', methods=['GET'])
def shutdown():
    stop()
    os.kill(os.getpid(), signal.SIGINT)
    # never sent: the server is interrupted before the response goes out
    return jsonify({&quot;status&quot;: True, &quot;message&quot;: &quot;Server is shutting down...&quot; })

if __name__ == '__main__':
    api.run()
    GPIO.cleanup()</code></pre>
<!--kg-card-end: markdown--><p>And run the server</p><!--kg-card-begin: markdown--><pre><code>nohup sudo python server.py &gt; /tmp/fan-server.log &amp;</code></pre>
<!--kg-card-end: markdown--><p>Now we have a fan server deployed and available only inside the cluster (not exposed outside the LAN). If you want to deploy this server as a Pod, it must be a privileged Pod (with access to host devices) and you must mount all the GPIO devices (available in /dev) into the pod with the same names and paths, but this is not guaranteed to work.</p><p><strong>The server is explained in the "Material section".</strong></p><p><strong>Now, to tell the backend app where it can pilot the fan, we add an environment variable "FAN_SERVER_URL" to the backend deployment; it must look like "http://192.168.1.1:5000". This way, the switch can turn the fan on and off.</strong></p><p><strong>The second feature makes our backend app run in auto mode: it starts the fan when a maximum temperature is reached and stops it when a minimum temperature is reached. To do this, we specify a maximum value via the backend app environment variable "FAN_MAXTEMP"; the fan turns on when any node has a temperature greater than this value, and turns off when the maximum temperature across all nodes drops below 90% of this value.</strong></p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/04/WebAppFan.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><hr><h3 id="material-section-l293d">Material section - L293D</h3><p>To make the fan manageable by the RPi, we will use an L293D chipset:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/04/l293d-pins-1-.jpg" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"><figcaption>L293D</figcaption></figure><p>We will use the first half of the chipset, so we connect "Input 1" to RPi PIN 11, "Input 2" to RPi PIN 13 and "Enable 1" to RPi PIN 15.</p><p>PIN 15 enables or disables the motor, and PINs 11 &amp; 13 set the motor rotation direction. We will power the chipset from the RPi 5V, and in our case we will use the same power source to power the motor (not a good idea, but faster to get it running). Later we will replace it with a separate stable 5V supply.</p><p>The fan server drives the GPIO output pins to start and stop the fan, and it can read the GPIO state to know whether the fan is running.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/04/GPIO-Pinout-Diagram-2-1-.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"><figcaption>Raspberry 4 PIN</figcaption></figure><p>PIN connection result:</p><!--kg-card-begin: markdown--><pre><code>RPi PIN    L293D PIN
11         2
13         7
15         1
4          16
6          4/5
2          8 # should be another PWR source
NAN        3 # Motor +
NAN        6 # Motor -</code></pre>
<!--kg-card-end: markdown--><p><em>Note: you can use GPIO pins other than 11, 13 and 15 if you want, but you must adjust the server code in the pin declaration section.</em></p><p>To test the connection and the fan server, we can use a simple curl command:</p><!--kg-card-begin: markdown--><pre><code>curl http://[FAN-CONNECTED-NODE-IP]:5000/fan/start
curl http://[FAN-CONNECTED-NODE-IP]:5000/fan/status
curl http://[FAN-CONNECTED-NODE-IP]:5000/fan/stop</code></pre>
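The auto-mode behaviour described above boils down to a small hysteresis rule. Here is a sketch of the decision logic only (illustrative; the real backend is Java, and `FAN_MAXTEMP` has the semantics described above):

```python
def fan_should_run(node_temps, max_temp, currently_on):
    """Hysteresis rule: turn the fan on when any node exceeds max_temp, and
    turn it off only once the hottest node drops below 90% of max_temp;
    in between, keep the current state."""
    hottest = max(node_temps)
    if hottest > max_temp:
        return True
    if hottest < 0.9 * max_temp:
        return False
    return currently_on
```

With FAN_MAXTEMP=60, the fan starts above 60 and stops only once every node is back below 54, which avoids rapid on/off flapping around the threshold.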
<!--kg-card-end: markdown--><p></p><p>When everything is done, you can see the fan status in the web view with the fan switch state (on/off), and you will see the temperature chart evolve. When the 60°C value is reached, the fan starts, and the chart comes back down to 54°C.</p><p>Start state</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/04/start-1.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>And stop state</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/04/stop.png" class="kg-image" alt="K8S CPU Temperature & Fan monitoring for RPI"></figure><p>We can manually start or stop the cluster fan using the web view switch.</p><hr><h2 id="realtime-monitoring">Realtime monitoring</h2><p>See how the fan auto-management works.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/rFNTgYr3WIE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><hr><p><strong>It's done, enjoy :)</strong></p>]]></content:encoded></item><item><title><![CDATA[DevOps with your RPI K8S]]></title><description><![CDATA[We have a ready K8S RPI cluster, we will start to develop, build and deploy some applications.]]></description><link>https://blog.medinvention.dev/devops-with-own-rpi-k8s/</link><guid isPermaLink="false">5e7f9f203b33b50001ff1599</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[devops]]></category><category><![CDATA[nginx]]></category><category><![CDATA[springboot]]></category><category><![CDATA[reactjs]]></category><category><![CDATA[raspberry]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Tue, 31 Mar 2020 11:11:26 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2020/03/kub1-1-.png" medium="image"/><content:encoded><![CDATA[<img 
src="https://blog.medinvention.dev/content/images/2020/03/kub1-1-.png" alt="DevOps with your RPI K8S"><p>Now that we have a ready K8S cluster, we will start to develop, build and deploy some applications.</p><p>Let's check the cluster health:</p><!--kg-card-begin: markdown--><pre><code>NAME     STATUS   ROLES    AGE     VERSION
east     Ready    &lt;none&gt;   3d23h   v1.17.4
master   Ready    master   66d     v1.17.1
north    Ready    &lt;none&gt;   66d     v1.17.1
south    Ready    &lt;none&gt;   66d     v1.17.1
west     Ready    &lt;none&gt;   3d23h   v1.17.4</code></pre>
<!--kg-card-end: markdown--><p>So, to make an operational CI/CD pipeline, we need a good orchestrator like Jenkins. Let's deploy it.</p><hr><h2 id="orchestrator">Orchestrator</h2><p>We will create a namespace named "jenkins" with a small PVC for the master:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: jenkins
  
---  
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-master-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi</code></pre>
<!--kg-card-end: markdown--><p>Next, we will create the master deployment:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-master
  namespace: jenkins
  labels:
    app: jenkins-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-master
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins-master
        image: medinvention/jenkins-master:arm
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        - containerPort: 50000
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-master-pvc
</code></pre>
<!--kg-card-end: markdown--><p>The master container exposes two ports: 8080 for Web access and 50000 for the JNLP communication used by slaves (Jenkins executors).</p><p>Let's create two services to expose these ports:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins-master-service
  namespace: jenkins
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: jenkins-master

---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-slave-service
  namespace: jenkins
spec:
  ports:
  - name: jnlp
    protocol: TCP
    port: 50000
    targetPort: 50000
  selector:
    app: jenkins-master </code></pre>
<!--kg-card-end: markdown--><p>And finally, the ingress component to access the Jenkins GUI from outside the cluster:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Secret
metadata:
  name: jenkins-tls
  namespace: jenkins
data:
  tls.crt: {{crt}}
  tls.key: {{key}}
type: kubernetes.io/tls

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: jenkins-master
  namespace: jenkins
  labels:
    app: jenkins-master
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - backend:
              serviceName: jenkins-master-service
              servicePort: http
            path: /
  tls:
    - hosts:
      - {{host}}
      secretName: jenkins-tls
</code></pre>
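<!--kg-card-end: markdown--><p>The Base64 values for the TLS secret can be produced like this (the file names assume a Let's Encrypt issuance):</p><!--kg-card-begin: markdown--><pre><code># fullchain certificate
cat fullchain.pem | base64
# private key
cat privkey.pem | base64</code></pre>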
<!--kg-card-end: markdown--><p>You must replace {{host}} with your domain, and {{crt}} and {{key}} with your Base64-encoded SSL certificate and private key.</p><p>Nice, we have a Jenkins master deployed:</p><!--kg-card-begin: markdown--><pre><code>jenkins    jenkins-master-cww    1/1    Running   4d4h    10.244.2.150   north</code></pre>
<!--kg-card-end: markdown--><hr><h2 id="executor">Executor</h2><p>To build our application, we will use a Jenkins slave node, keeping the build load off the master.</p><p>You must configure a new node in the administration section to get the secret token needed by the slave before deploying it.</p><!--kg-card-begin: markdown--><pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-slave-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-slave
  namespace: jenkins
  labels:
    app: jenkins-slave
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-slave
  template:
    metadata:
      labels:
        app: jenkins-slave
    spec:
      containers:
      - name: jenkins-slave
        image: medinvention/jenkins-slave:arm
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: &quot;256Mi&quot;
            cpu: &quot;250m&quot;
          limits:
            memory: &quot;512Mi&quot;
            cpu: &quot;250m&quot;
        env:
          - name: &quot;JENKINS_SECRET&quot;
            value: &quot;{{jenkins-secret}}&quot;
          - name: &quot;JENKINS_AGENT_NAME&quot;
            value: &quot;exec-1&quot;
          - name: &quot;JENKINS_DIRECT_CONNECTION&quot;
            value: &quot;jenkins-slave-service.jenkins.svc.cluster.local:50000&quot;
          - name: &quot;JENKINS_INSTANCE_IDENTITY&quot;
            value: &quot;{{jenkins-id}}&quot;
        volumeMounts:
        - mountPath: /var/jenkins
          name: jenkins-home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-slave-pvc
      nodeSelector:
        name: east</code></pre>
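<!--kg-card-end: markdown--><p>The instance identity value can be read from the "X-Instance-Identity" HTTP response header of the Jenkins master; for example (with {{host}} standing for your Jenkins domain):</p><!--kg-card-begin: markdown--><pre><code>curl -sI https://{{host}}/ | grep -i x-instance-identity</code></pre>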
<!--kg-card-end: markdown--><p>Replace {{jenkins-secret}} with the token available after creating the node configuration on the master, and replace {{jenkins-id}} with the identity token sent in the HTTP response headers (use a simple curl to extract it, <a href="https://wiki.jenkins.io/display/JENKINS/Instance+Identity">@see</a>).</p><p>In our case, we add a "nodeSelector" to force pod assignment to a specific node with more capacity.</p><p>After a little while, the new node will appear as connected and available.</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/03/slave-connected.png" class="kg-image" alt="DevOps with your RPI K8S"></figure><hr><h2 id="the-application">The application</h2><p>For this post, I will use a Maven project composed of three modules, <strong>a Java Core module, a Spring Boot Backend module and a React Front module</strong>, hosted on <a href="https://github.com/mmohamed/k8s-monitoring">GitHub</a>. It's a simple monitoring application for your K8S cluster.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/monitoring.png" class="kg-image" alt="DevOps with your RPI K8S"><figcaption>Top of nodes&nbsp;</figcaption></figure><h3 id="ci-pipeline">CI Pipeline</h3><p>For our pipelines, we will use a declarative Jenkins pipeline file. We start with the <strong>Build</strong> stage: check out the source and build the artifacts (Java with Maven and JS with Node).</p><!--kg-card-begin: markdown--><pre><code>pipeline {
   agent {
       label  'slave'
   }

   tools {
      maven &quot;AutoMaven&quot;
      nodejs &quot;AutoNode&quot;
   }

   stages {
      stage('Build') {
         steps {
            //checkout
            checkout([$class: 'GitSCM',
                branches: [[name: '*/dev']],
                doGenerateSubmoduleConfigurations: false,
                extensions: [[$class: 'SubmoduleOption',
                              disableSubmodules: false,
                              parentCredentials: false,
                              recursiveSubmodules: true,
                              reference: '',
                              trackingSubmodules: false]], 
                submoduleCfg: [], 
                userRemoteConfigs: [[url: 'https://github.com/mmohamed/k8s-monitoring.git']]])
            // Package
            sh 'mkdir -p $NODEJS_HOME/node'
            sh 'cp -n $NODEJS_HOME/bin/node $NODEJS_HOME/node'
            sh 'cp -rn $NODEJS_HOME/lib/node_modules $NODEJS_HOME/node'
            sh 'ln -sfn $NODEJS_HOME/lib/node_modules/npm/bin/npm-cli.js $NODEJS_HOME/node/npm'
            sh 'ln -sfn $NODEJS_HOME/lib/node_modules/npm/bin/npx-cli.js $NODEJS_HOME/node/npx'
            sh 'export NODE_OPTIONS=&quot;--max_old_space_size=256&quot; &amp;&amp; export REACT_APP_URL_BASE=&quot;https://{{apihostname}}/k8s&quot; &amp;&amp; export PATH=$PATH:$NODEJS_HOME/bin &amp;&amp; export NODE_PATH=$NODEJS_HOME &amp;&amp; mvn install'
            // Copy artifact to Docker build workspace
            sh 'mkdir -p ./service/target/dependency &amp;&amp; (cd service/target/dependency; jar -xf ../*.jar) &amp;&amp; cd ../..'  
            sh 'mkdir -p ./service/target/_site &amp;&amp; cp -r ./webapp/target/classes/static/* service/target/_site'   
         }
      }
      ....
   }
}</code></pre>
<!--kg-card-end: markdown--><p><em>I have already configured the Node and Maven tools in the administration section.</em></p><p>In this stage, we start by preparing the tools and some environment variables for the Node binaries; then we must replace {{apihostname}} with our API application hostname (used when building the Front component) and call the "maven install" goal.</p><p>After building the artifacts, we extract the jar content (it makes container startup faster) and copy the ReactJS output files to the target directory, so they can be copied to the Docker builder node.</p><p>The next stage, <strong>Prepare Docker workspace</strong>, builds the image context: we first copy the target directory content to the Docker workspace (we have defined node credentials "SSHMaster" in the Jenkins master), then we create and copy two Dockerfiles, the first based on OpenJDK for the backend Java module and the second on a simple Nginx for the front Web module.</p><!--kg-card-begin: markdown--><pre><code>....

stage('Prepare Workspace'){
         steps{
            // Prepare Docker workspace
            withCredentials([sshUserPrivateKey(credentialsId: &quot;SSHMaster&quot;, keyFileVariable: 'keyfile')]) {
                sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'mkdir -p ~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER'&quot;
                sh &quot;scp -i ${keyfile} -r service/target [USER]@[NODEIP]:~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER&quot;
            }
            // Create Dockerfile for api
            writeFile file: &quot;./Dockerfile.api&quot;, text: '''
FROM arm32v7/openjdk:8-jdk 
ARG user=spring
ARG group=spring
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group} &amp;&amp; useradd -u ${uid} -g ${gid} -m -s /bin/bash ${user}
ARG DEPENDENCY=target/dependency
COPY --chown=spring:spring ${DEPENDENCY}/BOOT-INF/lib /var/app/lib
COPY --chown=spring:spring ${DEPENDENCY}/META-INF /var/app/META-INF
COPY --chown=spring:spring ${DEPENDENCY}/BOOT-INF/classes /var/app
USER ${user}
ENTRYPOINT [&quot;java&quot;,&quot;-cp&quot;,&quot;var/app:var/app/lib/*&quot;,&quot;dev.medinvention.service.Application&quot;]'''
            // Create Dockerfile for front
            writeFile file: &quot;./Dockerfile.front&quot;, text: '''
FROM nginx
EXPOSE 80
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY target/_site/ /usr/share/nginx/html'''
            // Create config for front
            writeFile file: &quot;./nginx.conf&quot;, text: '''
server {
    listen       80;
    server_name  localhost;
    location / {
        root   /usr/share/nginx/html;
        try_files $uri /index.html;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }    
}'''
            // copy docker and config file
            withCredentials([sshUserPrivateKey(credentialsId: &quot;SSHMaster&quot;, keyFileVariable: 'keyfile')]) {
                sh &quot;scp -i ${keyfile} Dockerfile.api [USER]@[NODEIP]:~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER&quot;
                sh &quot;scp -i ${keyfile} Dockerfile.front [USER]@[NODEIP]:~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER&quot;
                sh &quot;scp -i ${keyfile} nginx.conf [USER]@[NODEIP]:~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER&quot;
            }
         }
      }
      
....</code></pre>
<!--kg-card-end: markdown--><p>Now, we can start building our images with the <strong>Docker build</strong> stage.</p><!--kg-card-begin: markdown--><pre><code>....

stage('Docker build'){
         steps{
            withCredentials([sshUserPrivateKey(credentialsId: &quot;SSHMaster&quot;, keyFileVariable: 'keyfile')]) {
               sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'docker build ~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER -f ~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER/Dockerfile.api -t medinvention/k8s-monitoring-api:arm'&quot;
               sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'docker push medinvention/k8s-monitoring-api:arm'&quot;
               sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'docker rmi medinvention/k8s-monitoring-api:arm'&quot;
               sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'docker build ~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER -f ~/s2i-k8S/k8s-monitoring-$BUILD_NUMBER/Dockerfile.front -t medinvention/k8s-monitoring-front:arm'&quot;
               sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'docker push medinvention/k8s-monitoring-front:arm'&quot;
               sh &quot;ssh -i ${keyfile} [USER]@[NODEIP] 'docker rmi medinvention/k8s-monitoring-front:arm'&quot;
            }
         }
      }
      
....</code></pre>
<!--kg-card-end: markdown--><p>Don't forget to replace [USER] and [NODEIP] with the user and IP of your Docker build node (it can be any available cluster node).</p><p>And finally, we start the <strong>Kubernetes deployment</strong> stage.</p><!--kg-card-begin: markdown--><pre><code>....

stage('Kubernetes deploy'){
         steps{
            // deploy
            withCredentials([string(credentialsId: 'KubeToken', variable: 'TOKEN'),
                  string(credentialsId: 'TLSKey', variable: 'KEY'),
                  string(credentialsId: 'TLSCrt', variable: 'CRT')
               ]) {
               // TOKEN, CRT and KEY are already exposed as environment variables by withCredentials
               // (an export in a separate sh step would not persist to the next one)
               sh &quot;cd k8s &amp;&amp; sh deploy.sh&quot;
            }  
         }
      }
....</code></pre>
<!--kg-card-end: markdown--><p>For deployment, we can choose the <a href="https://helm.sh/">Helm packager</a> to make our deployment more industrial, or we can write our own deployment script, as in this case:</p><!--kg-card-begin: markdown--><pre><code>#!/bin/bash

if [ -z &quot;$CRT&quot; ] || [ -z &quot;$KEY&quot; ]; then
    echo &quot;TLS CRT/KEY environment value not found !&quot;
    exit 1
fi

if [ -z &quot;$TOKEN&quot; ]; then
    echo &quot;Kube Token environment value not found !&quot;
    exit 1
fi

echo &quot;Get Kubectl&quot;
curl -s -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/arm/kubectl
chmod +x ./kubectl

commitID=$(git log -1 --pretty=&quot;%H&quot;)

if [ $? != 0 ] || [ -z &quot;$commitID&quot; ]; then
    echo &quot;Unable to determine CommitID !&quot;
    exit 1
fi

echo &quot;Deploy for CommitID : ${commitID}&quot;

# create new deploy
sed -i &quot;s|{{crt}}|`echo $CRT`|g&quot; api.yaml
sed -i &quot;s|{{key}}|`echo $KEY`|g&quot; api.yaml
sed -i &quot;s|{{host}}|[BACKENDHOSTNAME]|g&quot; api.yaml
sed -i &quot;s|{{commit}}|`echo $commitID`|g&quot; api.yaml

./kubectl --token=$TOKEN apply -f api.yaml
if [ $? != 0 ]; then
    echo &quot;Unable to deploy API !&quot;
    exit 1
fi	

# wait for ready
attempts=0
rolloutStatusCmd=&quot;./kubectl --token=$TOKEN rollout status deployment/api -n monitoring&quot;
until $rolloutStatusCmd || [ $attempts -eq 60 ]; do
  $rolloutStatusCmd
  attempts=$((attempts + 1))
  sleep 10
done

# create new deploy
sed -i &quot;s|{{crt}}|`echo $CRT`|g&quot; front.yaml
sed -i &quot;s|{{key}}|`echo $KEY`|g&quot; front.yaml
sed -i &quot;s|{{host}}|[FRONTHOSTNAME]|g&quot; front.yaml
sed -i &quot;s|{{commit}}|`echo $commitID`|g&quot; front.yaml

./kubectl --token=$TOKEN apply -f front.yaml
if [ $? != 0 ]; then
    echo &quot;Unable to deploy Front !&quot;
    exit 1
fi	

# wait for ready
attempts=0
rolloutStatusCmd=&quot;./kubectl --token=$TOKEN rollout status deployment/front -n monitoring&quot;
until $rolloutStatusCmd || [ $attempts -eq 60 ]; do
  $rolloutStatusCmd
  attempts=$((attempts + 1))
  sleep 10
done</code></pre>
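<!--kg-card-end: markdown--><p>The script expects TOKEN, CRT and KEY in the environment, so to run it outside Jenkins you can set them manually (values are placeholders):</p><!--kg-card-begin: markdown--><pre><code>export TOKEN={{kube-token}}
export CRT={{base64-crt}}
export KEY={{base64-key}}
sh deploy.sh</code></pre>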
<!--kg-card-end: markdown--><p>In this script, we first check that the TLS certificate/key and the "kube" access token are present, then install a local "kubectl" client. Next we deploy the backend; if there is a problem with it, we stop the deployment process. If the backend deploys successfully, we proceed with the front deployment.</p><p>To give Jenkins access to the cluster, we need to generate an access token with a <strong>ClusterRoleBinding</strong> resource.</p><!--kg-card-begin: markdown--><pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: jenkins-access
    namespace: jenkins</code></pre>
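<!--kg-card-end: markdown--><p>The binding above references a "jenkins-access" ServiceAccount that must exist; a minimal manifest for it (same name assumed) looks like this:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-access
  namespace: jenkins</code></pre>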
<!--kg-card-end: markdown--><p>Run this command to get the token, then create a secret credential for Jenkins with the token content:</p><!--kg-card-begin: markdown--><pre><code>kubectl -n jenkins describe secret $(kubectl -n jenkins get secret | grep jenkins-access | awk '{print $1}')  </code></pre>
<!--kg-card-end: markdown--><p>And finally, for the backend deployment:</p><!--kg-card-begin: markdown--><pre><code>---
apiVersion: v1
kind: Namespace
metadata:
   name: monitoring

---   
apiVersion: v1
kind: Secret
metadata:
  name: monitoring-tls
  namespace: monitoring
data:
  tls.crt: {{crt}}
  tls.key: {{key}}
type: kubernetes.io/tls

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: monitoring-ingress
  namespace: monitoring
  labels:
    app: api
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - backend:
              serviceName: monitoring-service
              servicePort: http
            path: /
  tls:
    - hosts:
      - {{host}}
      secretName: monitoring-tls

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: monitoring
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
        commit: '{{commit}}'
    spec:
      serviceAccountName: api-access 
      containers:
        - name: api
          image: medinvention/k8s-monitoring-api:arm
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 2
            periodSeconds: 3
            failureThreshold: 1
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 300
            timeoutSeconds: 5
            periodSeconds: 60
            failureThreshold: 1

---
apiVersion: v1
kind: Service
metadata:
  name: monitoring-service
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: api

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-access
  namespace: monitoring

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: api-access
    namespace: monitoring</code></pre>
<!--kg-card-end: markdown--><p>And the front:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Secret
metadata:
  name: front-tls
  namespace: monitoring
data:
  tls.crt: {{crt}}
  tls.key: {{key}}
type: kubernetes.io/tls

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: front
  namespace: monitoring
  labels:
    app: front
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - backend:
              serviceName: front-service
              servicePort: http
            path: /
  tls:
    - hosts:
      - {{host}}
      secretName: front-tls

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  namespace: monitoring
  labels:
    app: front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front
  template:
    metadata:
      labels:
        app: front
        commit: '{{commit}}'
    spec:
      containers:
      - name: front
        image: medinvention/k8s-monitoring-front:arm
        imagePullPolicy: Always
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: front-service
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: front
</code></pre>
<!--kg-card-end: markdown--><p><strong>Important</strong>: if you use the same image tag for every build, you must specify "imagePullPolicy: Always" to force Kubernetes to pull the image on every deployment.</p><hr><h2 id="result">Result</h2><p>The S2I Job</p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/03/jenkins-job-s2i.png" class="kg-image" alt="DevOps with your RPI K8S"></figure><p></p><p>It's done, you have an operational CI/CD pipeline for your development environment. When you push new code, your pipeline will be executed automatically to test, build, and deploy your application.</p><p>The complete source code is available <a href="https://github.com/mmohamed/k8s-raspberry/tree/master/s2i/k8s-monitoring">@here</a>.</p><p></p><h2 id="enjoy"><em>Enjoy</em></h2>]]></content:encoded></item><item><title><![CDATA[RPI K8S Starting...]]></title><description><![CDATA[<p>I have a fresh cluster installed and I will start with a simple deployment of a static website.</p><hr><h2 id="docker-image">Docker Image</h2><p>For our Website, we will use Nginx built for the arm32 architecture.</p><!--kg-card-begin: markdown--><pre><code>FROM nginx

EXPOSE 80

COPY _site/ /usr/share/nginx/html</code></pre>
<!--kg-card-end: markdown--><p>Just copy the static website source code with an index file to "_site"</p>]]></description><link>https://blog.medinvention.dev/rpi-k8s-starting/</link><guid isPermaLink="false">5e810b2c3b33b50001ff18ce</guid><category><![CDATA[raspberry]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[nginx]]></category><category><![CDATA[ingress]]></category><category><![CDATA[kubectl]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Mon, 30 Mar 2020 10:47:07 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2020/03/IMG_20200325_204847.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.medinvention.dev/content/images/2020/03/IMG_20200325_204847.jpg" alt="RPI K8S Starting..."><p>I have a fresh cluster installed and I will start with a simple deployment of a static website.</p><hr><h2 id="docker-image">Docker Image</h2><p>For our Website, we will use Nginx built for the arm32 architecture.</p><!--kg-card-begin: markdown--><pre><code>FROM nginx

EXPOSE 80

COPY _site/ /usr/share/nginx/html</code></pre>
<!--kg-card-end: markdown--><p>Just copy the static website source code, with its index file, to the "_site" directory in the Docker context, then build the image, tag it, and push it to the "docker.io" registry. It will then be available to our cluster.</p><!--kg-card-begin: markdown--><pre><code>docker build . -t {{yourrepository}}/home
docker tag {{yourrepository}}/home:latest {{yourrepository}}/home:arm
docker push {{yourrepository}}/home:arm</code></pre>
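<!--kg-card-end: markdown--><p>Pushing requires being logged in to Docker Hub; this can be done with (username is a placeholder):</p><!--kg-card-begin: markdown--><pre><code>docker login -u {{yourrepository}}</code></pre>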
<!--kg-card-end: markdown--><p>We must be logged in to the Docker Hub registry before pushing the new image, and we must replace {{yourrepository}} with our Docker Hub username.</p><hr><h2 id="kubernetes-deployment">Kubernetes deployment</h2><p>With the image prepared, it's time to deploy it.</p><h3 id="namespace">Namespace</h3><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: front</code></pre>
<!--kg-card-end: markdown--><p>Create a simple namespace to act as a scope for our project.</p><h3 id="ssl-secret">SSL Secret</h3><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Secret
metadata:
  name: home-tls
  namespace: front
data:
  tls.crt: {{crt}}
  tls.key: {{key}}
type: kubernetes.io/tls</code></pre>
<!--kg-card-end: markdown--><p>To have HTTPS access to our website, we need an SSL certificate, and that's easy and free with <a href="https://letsencrypt.org/">letsencrypt.org</a>: just install CertBot on any node of your cluster and use it to generate a certificate for your domain.</p><p>You can use the "TXT Record" identification method for certificate generation: CertBot gives you a string token that you must add to your domain's DNS configuration as a "TXT Record" to continue the generation.</p><p>Afterwards, you must replace {{crt}} with the <strong>fullchain certificate, and not just cert.pem</strong>, encoded in Base64, just as {{key}} must be replaced with the Base64-encoded private key.</p><!--kg-card-begin: markdown--><pre><code># private key
cat privkey.pem | base64
# fullchain certificate
cat fullchain.pem | base64</code></pre>
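<!--kg-card-end: markdown--><p>The "TXT Record" method described above corresponds to CertBot's manual DNS challenge; a typical invocation (the domain is a placeholder) is:</p><!--kg-card-begin: markdown--><pre><code>sudo certbot certonly --manual --preferred-challenges dns -d example.com</code></pre>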
<!--kg-card-end: markdown--><h3 id="deployment-service">Deployment &amp; Service</h3><!--kg-card-begin: markdown--><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: home
  namespace: front
  labels:
    app: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home
  template:
    metadata:
      labels:
        app: home
    spec:
      containers:
      - name: home
        image: medinvention/home:arm
        imagePullPolicy: Always
        ports:
        - containerPort: 80</code></pre>
<!--kg-card-end: markdown--><p>For our deployment, we define one container for the built image and expose port 80 for web access. We define a label selector "app: home" to bind the pod to a service and create an HTTP endpoint to access the pod's service from outside the cluster.</p><!--kg-card-begin: markdown--><pre><code>apiVersion: v1
kind: Service
metadata:
  name: home-service
  namespace: front
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: home</code></pre>
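<!--kg-card-end: markdown--><p>You can verify the service-to-pod binding by listing the endpoints of the service:</p><!--kg-card-begin: markdown--><pre><code>kubectl -n front get endpoints home-service</code></pre>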
<!--kg-card-end: markdown--><p>This way, we have bound "home-service" to the HTTP port of the container inside the deployed pod. "Kube" uses the match-label configuration to make the association between the service and the pod, and to define the endpoint.</p><h3 id="ingress">Ingress</h3><!--kg-card-begin: markdown--><pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: home
  namespace: front
  labels:
    app: home
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - backend:
              serviceName: home-service
              servicePort: http
            path: /
  tls:
    - hosts:
      - {{host}}
      secretName: home-tls</code></pre>
<!--kg-card-end: markdown--><p>To make our service accessible from outside the cluster, we define an "Ingress" object to publish the home service using the Nginx <strong>Load Balancer</strong>. Change {{host}} to your host and apply it.</p><p>If you check the <strong>Load Balancer</strong> service, you will find dedicated ports like "80:31782/TCP,443:31179/TCP". You must configure your internet router (or box) to NAT the external port 443 to port 31179 of the master node.</p><hr><h2 id="it-s-done-enjoy-"><a href="https://medinvention.dev">It's done, enjoy :)</a></h2><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/03/homeland-2.png" class="kg-image" alt="RPI K8S Starting..."></figure>]]></content:encoded></item><item><title><![CDATA[Kubernetes On Raspberry]]></title><description><![CDATA[<p>With this memo, I propose you discover how to deploy a Kubernetes (K8S) solution on your own cluster built from a few Raspberry Pis.</p><p></p><h2 id="preparing-nodes">Preparing nodes</h2><p>We need a Raspbian release able to support Docker, so we will use HypriotOS, available <a href="https://github.com/hypriot/image-builder-rpi/releases">here</a>.</p><!--kg-card-begin: markdown--><pre><code>curl -OJSLs https://github.com/hypriot/image-builder-rpi/releases/download/v1.</code></pre>]]></description><link>https://blog.medinvention.dev/kubernetes-on-raspberry/</link><guid isPermaLink="false">5e77bd7f523f400001c4dada</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[devops]]></category><category><![CDATA[raspberry]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Wed, 25 Mar 2020 20:39:35 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2020/03/IMG_20200325_205140.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.medinvention.dev/content/images/2020/03/IMG_20200325_205140.jpg" alt="Kubernetes On Raspberry"><p>With this memo, I propose you discover how to deploy a
Kubernetes (K8S) solution on your own cluster built from a few Raspberry Pis.</p><p></p><h2 id="preparing-nodes">Preparing nodes</h2><p>We need a Raspbian release able to support Docker, so we will use HypriotOS, available <a href="https://github.com/hypriot/image-builder-rpi/releases">here</a>.</p><!--kg-card-begin: markdown--><pre><code>curl -OJSLs https://github.com/hypriot/image-builder-rpi/releases/download/v1.12.0/hypriotos-rpi-v1.12.0.img.zip
unzip hypriotos-rpi-v1.12.0.img.zip</code></pre>
<!--kg-card-end: markdown--><p>Now we need a utility for flashing the SD cards while forcing some instance parameters, like user, hostname, etc. The Hypriot flash tool is the most efficient, available <a href="https://github.com/hypriot/flash">here</a>.</p><!--kg-card-begin: markdown--><pre><code>curl -LO https://github.com/hypriot/flash/releases/download/2.5.0/flash
chmod +x flash &amp;&amp; sudo mv flash /usr/local/bin/flash</code></pre>
<!--kg-card-end: markdown--><p>Now let's flash; I have one master node and four worker nodes:</p><!--kg-card-begin: markdown--><pre><code>flash --hostname master hypriotos-rpi-v1.12.0.img
flash --hostname north hypriotos-rpi-v1.12.0.img
flash --hostname south hypriotos-rpi-v1.12.0.img
flash --hostname east hypriotos-rpi-v1.12.0.img
flash --hostname west hypriotos-rpi-v1.12.0.img</code></pre>
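<!--kg-card-end: markdown--><p>After booting the freshly flashed nodes, you can check that each one is reachable over SSH (user and IP are placeholders, in the same style as below):</p><!--kg-card-begin: markdown--><pre><code>ssh [USER]@[MASTERIP] hostname</code></pre>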
<!--kg-card-end: markdown--><p></p><h2 id="configuring">Configuring</h2><h3 id="i-inventary">I - Inventory</h3><p>To configure our cluster, we will use some Ansible playbooks; it's more powerful and can be reused with new nodes if we want.</p><p>We start by downloading the Ansible playbooks available <a href="https://github.com/mmohamed/k8s-raspberry/tree/master/ansible">here</a>.</p><p>Then we adjust the host configuration: first set the master and worker IPs, then set the SSH user and password (I use the same credentials for all nodes for simplicity):</p><!--kg-card-begin: markdown--><pre><code>sed &quot;s/{{masterip}}/[MASTERIP]/&quot; hosts.dist &gt; hosts 
sed -i &quot;s/{{northip}}/[NORTHIP]/&quot; hosts 
sed -i &quot;s/{{southip}}/[SOUTHIP]/&quot;  hosts 
sed -i &quot;s/{{eastip}}/[EASTIP]/&quot; hosts 
sed -i &quot;s/{{westip}}/[WESTIP]/&quot;  hosts 

sed &quot;s/{{user}}/[USER]/&quot; group_vars/all.yml.dist &gt; group_vars/all.yml
sed -i &quot;s/{{password}}/[PASS]/&quot; group_vars/all.yml</code></pre>
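<!--kg-card-end: markdown--><p>Before running the playbooks, you can check Ansible connectivity to all nodes with the ping module:</p><!--kg-card-begin: markdown--><pre><code>ansible all -i hosts -m ping</code></pre>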
<!--kg-card-end: markdown--><p></p><h3 id="ii-preparing-os">II- Preparing OS</h3><p>Executing the "bootstrap" playbook configures every Raspberry Pi, enabling cgroups for memory and CPU and disabling all swap.</p><!--kg-card-begin: markdown--><pre><code>ansible-playbook bootstrap.yml -i hosts --verbose</code></pre>
<!--kg-card-end: markdown--><p>Next step: install all common Kubernetes dependencies on the master node, then initialize the cluster with kubeadm.</p><!--kg-card-begin: markdown--><pre><code>ansible-playbook master.yml -i hosts --verbose</code></pre>
<!--kg-card-end: markdown--><p>Finally, install the dependencies on the worker nodes and join them to the master node.</p><!--kg-card-begin: markdown--><pre><code>ansible-playbook node.yml -i hosts --verbose</code></pre>
<!--kg-card-end: markdown--><p></p><h2 id="set-up-cluster">Set Up Cluster</h2><h3 id="i-cni">I- CNI</h3><p>Kubernetes needs a network plugin to manage intra-cluster communication. For our project, I chose Flannel, with some adjustments (like the ARM image architecture).</p><!--kg-card-begin: markdown--><pre><code>kubectl create -f kube/flannel.yml
kubectl create -f kube/kubedns.yml
# Must be done on all node
sudo sysctl net.bridge.bridge-nf-call-iptables=1</code></pre>
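The `sysctl` setting above does not survive a reboot. One way to persist it (an assumption on my part; the playbooks may already handle this) is a drop-in under `/etc/sysctl.d`. The snippet below writes to `/tmp` purely to illustrate the file content; on a real node the file belongs at `/etc/sysctl.d/k8s.conf`, applied with `sudo sysctl --system`:

```shell
# Illustration only: build the drop-in in /tmp; on a real node write it to
# /etc/sysctl.d/k8s.conf and reload with `sudo sysctl --system`
cat > /tmp/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
EOF
cat /tmp/k8s.conf
```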
<!--kg-card-end: markdown--><p></p><h3 id="ii-ingress">II- Ingress</h3><p>Ingress manages external access to the services in a cluster, and may provide load balancing, SSL termination and name-based virtual hosting. For our project we use the NGINX Ingress controller.</p><!--kg-card-begin: markdown--><pre><code>helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install nginx-ingress stable/nginx-ingress --set defaultBackend.image.repository=docker.io/medinvention/ingress-default-backend,controller.image.repository=quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm,defaultBackend.image.tag=latest,controller.image.tag=0.27.1
helm install ingress stable/nginx-ingress --set controller.hostNetwork=true,controller.kind=DaemonSet</code></pre>
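Once the controller runs, services are exposed by creating Ingress resources. A minimal sketch, using the `networking.k8s.io/v1beta1` API that matches the Kubernetes 1.17 era of this post; the hostname and service name are illustrative, not taken from the post:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    # route this Ingress through the NGINX controller installed above
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: demo.example.com      # illustrative hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-app   # illustrative service
              servicePort: 80
```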
<!--kg-card-end: markdown--><p>For the cluster public IP:</p><!--kg-card-begin: markdown--><pre><code># Check whether a public IP is set
kubectl get svc ingress-nginx-ingress-controller -o jsonpath=&quot;{.status.loadBalancer.ingress[0].ip}&quot;
# You can set it manually
kubectl patch svc nginx-ingress-controller -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;, &quot;externalIPs&quot;:[&quot;[YOUR-PUBLIC-IP]&quot;]}}'
</code></pre>
<!--kg-card-end: markdown--><p><strong>Troubleshooting</strong></p><p>If Pods cannot communicate, or CoreDNS cannot become ready, run on every node:</p><!--kg-card-begin: markdown--><pre><code>sudo systemctl stop docker
sudo iptables -t nat -F
sudo iptables -P FORWARD ACCEPT
sudo ip link del docker0
sudo ip link del flannel.1
sudo systemctl start docker</code></pre>
<!--kg-card-end: markdown--><p></p><h3 id="iii-storage">III- Storage</h3><p>As a storage solution, we can deploy a Ceph server or an NFS service and configure the cluster to use it. For our project, we will install an NFS server on the master node.</p><!--kg-card-begin: markdown--><pre><code>sudo apt-get install nfs-kernel-server nfs-common
sudo systemctl enable nfs-kernel-server
sudo systemctl start nfs-kernel-server

sudo cat &gt;&gt; /etc/exports &lt;&lt;EOF
/data/kubernetes-storage/ north(rw,sync,no_subtree_check,no_root_squash)
/data/kubernetes-storage/ south(rw,sync,no_subtree_check,no_root_squash)
/data/kubernetes-storage/ east(rw,sync,no_subtree_check,no_root_squash)
/data/kubernetes-storage/ west(rw,sync,no_subtree_check,no_root_squash)
EOF

sudo exportfs -a  </code></pre>
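These exports can also be consumed directly by the cluster with a static PersistentVolume, as a minimal sketch (the server IP and capacity are illustrative; the next step instead deploys a manifest from the repository that provisions storage dynamically):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 1Gi            # illustrative size
  accessModes:
    - ReadWriteMany         # NFS supports shared read-write mounts
  nfs:
    server: 192.168.1.17    # master node IP (example value)
    path: /data/kubernetes-storage
```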
<!--kg-card-end: markdown--><p>On each worker node:</p><!--kg-card-begin: markdown--><pre><code>sudo apt-get install nfs-common</code></pre>
<!--kg-card-end: markdown--><p>Next, we deploy the NFS service:</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f storage/nfs-deployment.yml</code></pre>
<!--kg-card-end: markdown--><p>This will create a new storage class, mark it as the default, and deploy the storage pod.</p><p>Now, to test the configuration:</p><!--kg-card-begin: markdown--><pre><code>kubectl apply -f storage/nfs-testing.yml</code></pre>
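For reference, a default storage class is just a StorageClass carrying the `is-default-class` annotation. A hedged sketch of what `nfs-deployment.yml` might contain, plus a claim like the one the test manifest exercises — names and the provisioner value are assumptions, not copied from the repository:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage            # assumed name
  annotations:
    # this annotation is what makes a StorageClass the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.dev/nfs   # must match the deployed NFS provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```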
<!--kg-card-end: markdown--><p></p><h2 id="tricks">Tricks</h2><p></p><h3 id="i-cluster-backup">I- Cluster backup</h3><p>To back up our cluster, we need to run:</p><pre><code class="language-shell">./os/backup.sh # cluster data will be saved in ~/bkp</code></pre><p></p><h3 id="ii-cluster-tear-down">II- Cluster tear down</h3><p>To reset the cluster, just run on the master node:</p><!--kg-card-begin: markdown--><pre><code>kubeadm reset
<!--kg-card-end: markdown--><p></p><h2 id="it-s-ready">It's Ready</h2><p>After a short while, all nodes should be Ready:</p><!--kg-card-begin: markdown--><pre><code>kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
east     Ready    &lt;none&gt;   23h   v1.17.4   192.168.1.30   &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     docker://19.3.5
master   Ready    master   63d   v1.17.1   192.168.1.17   &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
north    Ready    &lt;none&gt;   63d   v1.17.1   192.168.1.54   &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
south    Ready    &lt;none&gt;   63d   v1.17.1   192.168.1.11   &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
west     Ready    &lt;none&gt;   23h   v1.17.4   192.168.1.85   &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     docker://19.3.5</code></pre>
<!--kg-card-end: markdown--><p></p><h2 id="enjoy-">Enjoy ...</h2><p></p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/03/IMG_20200325_205017.jpg" class="kg-image" alt="Kubernetes On Raspberry"></figure>]]></content:encoded></item><item><title><![CDATA[My Openshift]]></title><description><![CDATA[<p></p><p>Aloha, I propose you a general presentation of <a href="https://www.openshift.com/">Openshift</a> with a <a href="https://symfony.com/">Symfony 4</a> application as a use case.<br><br>First, we start with presentations </p><blockquote><em>I'm an open source container application platform based on Kubernetes container orchestrator for enterprise application development and deployment ¹</em></blockquote><p><br>What can I say in a few words about</p>]]></description><link>https://blog.medinvention.dev/my-openshift/</link><guid isPermaLink="false">5e73e20e9521960001998e93</guid><category><![CDATA[openshift]]></category><category><![CDATA[devops]]></category><category><![CDATA[symfony]]></category><category><![CDATA[memcache]]></category><dc:creator><![CDATA[Marouan MOHAMED]]></dc:creator><pubDate>Fri, 20 Mar 2020 13:57:32 GMT</pubDate><media:content url="https://blog.medinvention.dev/content/images/2020/03/red-hat-openshift-vector-logo-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.medinvention.dev/content/images/2020/03/red-hat-openshift-vector-logo-1.png" alt="My Openshift"><p></p><p>Aloha, I propose you a general presentation of <a href="https://www.openshift.com/">Openshift</a> with a <a href="https://symfony.com/">Symfony 4</a> application as a use case.<br><br>First, we start with presentations </p><blockquote><em>I'm an open source container application platform based on Kubernetes container orchestrator for enterprise application development and deployment ¹</em></blockquote><p><br>What can I say in a few words about Openshift </p><ul><li>It's a<strong><strong> PaaS 
</strong></strong>: Platform-as-a-Service</li><li>Three offerings available: <strong><strong>Online (used in this post), Online Dedicated and private (not hosted at Red Hat)</strong></strong></li><li>It lets you <strong><strong>build, deploy and run</strong></strong> applications in <strong><strong>containers</strong></strong></li><li>Its configuration is based on a <strong><strong>Docker container engine</strong></strong> with the <strong><strong>Kubernetes orchestrator</strong></strong></li><li>It offers a <strong><strong>microservices</strong></strong>-oriented <strong><strong>architecture</strong></strong>.</li></ul><p><br>Among the services of the Openshift platform:</p><ul><li>Containerization (<strong><strong>Source-to-Image ², Docker Repository, Image Stream</strong></strong>)</li><li><strong><strong>Route </strong></strong>&amp; <strong><strong>LoadBalancer</strong></strong></li><li><strong><strong>Shared Storage</strong></strong></li><li>Resource Management (<strong><strong>Quota, Membership, ConfigMap, Secret</strong></strong>,...)</li><li><strong><strong>Monitoring </strong></strong>(Elasticsearch, Fluentd, Kibana) &amp; <strong><strong>Readiness / Liveness</strong></strong></li></ul><p><br><strong>Real life</strong><br>This is a simple example of a web application deployed in an Openshift cluster.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/Openshift-Example-Arch-1-.png" class="kg-image" alt="My Openshift"><figcaption><i>Reminder: an Openshift cluster is a set of virtual or physical machines containing nodes (at least one master); the nodes contain Pods, which are the result of running a Docker image.</i></figcaption></figure><p>The client (which may be a web browser) requests a URL (an external route to the cluster) of the web application; the platform Router dispatches this request to the relevant service (this may be an opening of port 80 of the front-end Pod).<br><br>The front-end 
Pod can then be considered the front-end controller of our application; it calls the business application, also deployed in a Pod on the back end, which in turn communicates with the database Pod (over a TCP service on port 3306, for example, with a MySQL database).<br><br>Finally, all Pods use the distributed storage system managed by the platform through virtual file system mounts.<br><br><em>Typically, when you have multiple clusters, you define a front cluster (perhaps behind a proxy) to expose your services outside the cluster, and another back cluster that embeds the business applications as well as access to the data. These two clusters will have different security configurations and requirements, which allows us to isolate and protect business data. </em><br><br><br><br><strong>Use case with a Symfony application</strong><br>It's a standard Symfony 4 demonstration application with a few modifications, for example adding Memcached support and using a MySQL database instead of SQLite. Source is available <a href="https://github.com/mmohamed/demo">here</a> (branch "openshift"). <br></p><p><br><strong>Let's start!</strong><br>To begin, we need an OpenShift instance: a simple registration to the demo program (<a href="https://manage.openshift.com/">Starter Pack</a>) gives us 2GB of memory, 2GB of disk and 4 CPU cores free for 60 days.<br><br>After logging in to the web console, we have a view of the catalog of images and services supported by the platform:<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/Catalog-osh-1-.png" class="kg-image" alt="My Openshift"><figcaption>Catalog view with our project "MedInvention project"</figcaption></figure><p>So we start by creating a project (we're entitled to one at a time). 
For Openshift, this project is a namespace with its own rules for resource management and access, and some isolation from other namespaces.<br>For this post, we will need an instance of Apache with PHP 7, OPcache and Memcached installed. The PHP image proposed by the platform does not support Memcached, which is why we will use a dedicated image available on <a href="https://cloud.docker.com/u/medinvention/repository/docker/medinvention/s2i-ubuntu-php72-apache24">DockerHub</a> (tag "openshift", source available on <a href="https://github.com/mmohamed/s2i-ubuntu-php72-apache24">GitHub</a>).<br>If we wanted to use the proposed image, we would go through the catalog and see this:<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-php-first-1-.png" class="kg-image" alt="My Openshift"><figcaption>You can use the CakePHP demonstration application available on GitHub, or consult the details of the image (environment variables and characteristics)</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-php-git-1-.png" class="kg-image" alt="My Openshift"><figcaption>You must specify a PHP version and the Git repository of your PHP application</figcaption></figure><p>To use our own image instead, we go through "<strong><strong>Add to Project</strong></strong>" then "<strong><strong>Import YAML / JSON</strong></strong>"</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-import-yaml-1-.png" class="kg-image" alt="My Openshift"><figcaption>Import a BuildConfig for Docker image building</figcaption></figure><p>We can copy / paste this content:</p><!--kg-card-begin: markdown--><pre><code>apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    app: webapp
  name: webapp
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'webapp:latest'
  source:
    git:
      ref: openshift
      uri: 'https://github.com/mmohamed/demo.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: DockerImage
        name: 'medinvention/s2i-ubuntu-php72-apache24:openshift'
    type: Source</code></pre>
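The BuildConfig above pushes its result to an ImageStreamTag, and the application is reached through a Route. A minimal sketch of both companion objects, assuming (not copied from the post) that the ImageStream and the Service exposing port 8080 are both named `webapp`:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream          # receives the webapp:latest tag produced by the build
metadata:
  labels:
    app: webapp
  name: webapp
---
apiVersion: route.openshift.io/v1
kind: Route                # exposes the webapp service outside the cluster
metadata:
  labels:
    app: webapp
  name: webapp
spec:
  to:
    kind: Service
    name: webapp           # assumed service name
  port:
    targetPort: 8080
```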
<!--kg-card-end: markdown--><p>Then we have to define an external route for the application.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-create-route-1-.png" class="kg-image" alt="My Openshift"><figcaption>When an image from the catalog is used, this route is provisioned automatically. It should point to the webapp service, which exposes port 8080 of our front-end Pod</figcaption></figure><p>We can also go through "<strong><strong>Add to Project</strong></strong>" then "<strong><strong>Deploy image</strong></strong>" (without Git sources), and we will have:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-php-custom-1-.png" class="kg-image" alt="My Openshift"><figcaption>We see that we get a DeploymentConfig for the image, that port 8080 will be exposed by a service, and that the container will be accessible on the hostname s2i-ubuntu-php72-apache</figcaption></figure><p><em>/!\ Tip: You can use the default PHP image to initiate an S2I DeploymentConfig with our GitHub repository, then modify the configuration to use our specific image without having to define a complete BuildConfig.</em><br></p><figure class="kg-card kg-image-card"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-change-image-1-.png" class="kg-image" alt="My Openshift"></figure><p>So now we have a functional BuildConfig and DeploymentConfig; our first Build and our first Deploy completed successfully. 
We have a Pod running and ready to process requests.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-webapp-overview-1-.png" class="kg-image" alt="My Openshift"><figcaption>External route available to access our application Pod</figcaption></figure><p><br>It then remains to deploy a database image (we will use <strong><strong>MySQL </strong></strong>from the catalog) and a Memcached server image. For Memcached, which does not exist in the catalog, we will use a <a href="https://github.com/sclorg/memcached/blob/master/openshift-template.yml">YAML Template</a> together with the "import object definitions in <strong><strong>YAML / JSON</strong></strong>" functionality to import this template and apply it.<br><br><br>Now we should have a complete environment:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-overview-without-jenkins-1-.png" class="kg-image" alt="My Openshift"><figcaption>Our environment</figcaption></figure><p>We can access our application through the generated public URL.<br></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-view-app-1-.png" class="kg-image" alt="My Openshift"><figcaption>You can browse the application to check the connection to the database and the communication with the Memcached server</figcaption></figure><p>One last tool is missing for continuous deployment; here we can use Jenkins, which the platform proposes in its catalog.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-overview-full-1-.png" class="kg-image" alt="My Openshift"><figcaption>By default, Jenkins will have its own public route and a single Job definition.</figcaption></figure><p>We just have to update our Jenkins Job to trigger a build, followed by a deployment, when it detects a change in our 
application sources (hosted on GitHub).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/OSH-jenkins-1-.png" class="kg-image" alt="My Openshift"><figcaption>A single Job to deploy; several Jobs forming a Pipeline with Build, Test and Deploy stages would of course be better</figcaption></figure><p>We can see the logs of the Job run:</p><!--kg-card-begin: markdown--><pre><code>Started by an SCM change

No credentials specified
 &gt; git rev-parse --is-inside-work-tree # timeout=10
....
 &gt; git rev-list --no-walk b9feb776306e16bcba20a6d371e9e83a838d1a2b # timeout=10

Starting &quot;Scale OpenShift Deployment&quot; with deployment config &quot;memcached&quot; from the project &quot;demo-medinvention&quot;.
 Scaling to &quot;0&quot; replicas and verifying the replica count is reached ...
Operation will timeout after 180000 milliseconds

Exiting &quot;Scale OpenShift Deployment&quot; successfully, where the deployment &quot;memcached-2&quot; reached &quot;0&quot; replica(s).

Starting the &quot;Trigger OpenShift Build&quot; step with build config &quot;webapp&quot; from the project &quot;demo-medinvention&quot;.
  Started build &quot;webapp-9&quot; and waiting for build completion ...
Operation will timeout after 900000 milliseconds
Pulling image &quot;medinvention/s2i-ubuntu-php72-apache24:openshift&quot; ...
Using medinvention/s2i-ubuntu-php72-apache24:openshift as the s2i builder image
---&gt; Installing application source...
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
....
Generating optimized autoload files
ocramius/package-versions:  Generating version class...
ocramius/package-versions: ...done generating version class
Executing script cache:clear [OK]
Executing script assets:install --symlink --relative public [OK]

Executing script security-checker security:check [OK]
---&gt; Clear cache/logs and fixing permissions...

Pushing image docker-registry.default.svc:5000/demo-medinvention/webapp:latest ...
Pushed 0/17 layers, 0% complete
....
Pushed 17/17 layers, 100% complete
Push successful


Exiting &quot;Trigger OpenShift Build&quot; successfully; build &quot;webapp-9&quot; has completed with status:  [Complete].

Starting &quot;Trigger OpenShift Deployment&quot; with deployment config &quot;webapp&quot; from the project &quot;demo-medinvention&quot;.
Operation will timeout after 600000 milliseconds

Exiting &quot;Trigger OpenShift Deployment&quot; successfully; deployment &quot;webapp-13&quot; has completed with status:  [Complete].

Starting &quot;Tag OpenShift Image&quot; with the source [image stream:tag] &quot;webapp:latest&quot; from the project &quot;demo-medinvention&quot; and destination stream(s) &quot;webapp&quot; with tag(s) &quot;prod&quot; from the project &quot;demo-medinvention&quot;.

Exiting &quot;Tag OpenShift Image&quot; successfully.

Starting &quot;Verify OpenShift Deployment&quot; with deployment config &quot;webapp&quot; from the project &quot;demo-medinvention&quot;.
  Waiting on the latest deployment for &quot;webapp&quot; to complete ...
Operation will timeout after 180000 milliseconds


Exiting &quot;Verify OpenShift Deployment&quot; successfully; deployment &quot;webapp-13&quot; has completed with status:  [Complete].

Starting &quot;Scale OpenShift Deployment&quot; with deployment config &quot;memcached&quot; from the project &quot;demo-medinvention&quot;.
 Scaling to &quot;1&quot; replicas and verifying the replica count is reached ...
Operation will timeout after 180000 milliseconds


Exiting &quot;Scale OpenShift Deployment&quot; successfully, where the deployment &quot;memcached-2&quot; reached &quot;1&quot; replica(s).
Finished: SUCCESS</code></pre>
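The sequence that the Jenkins Job performs can also be reproduced by hand with the `oc` client. This is a non-runnable sketch: it assumes an authenticated `oc` session against the cluster, and the project and resource names are taken from the log above.

```shell
oc project demo-medinvention

# Free CPU quota by stopping Memcached during the build
oc scale dc/memcached --replicas=0

# Build, deploy, then tag the resulting image as "prod"
oc start-build webapp --follow
oc rollout latest dc/webapp
oc tag demo-medinvention/webapp:latest demo-medinvention/webapp:prod

# Verify the rollout, then bring Memcached back
oc rollout status dc/webapp
oc scale dc/memcached --replicas=1
```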
<!--kg-card-end: markdown--><p>Finally,</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.medinvention.dev/content/images/2020/03/Openshift-Example-Result-1-.png" class="kg-image" alt="My Openshift"><figcaption>Target</figcaption></figure><p>Now, we have a <strong><strong>complete environment</strong></strong> with full integration and deployment.<br><br>We can pick out <strong><strong>two Pipelines</strong></strong>:<br></p><ul><li>The first is for our base <strong><strong>Docker image (S2I)</strong></strong>; as I have set up an <strong><strong>Auto-Build </strong></strong>in <strong><strong>DockerHub</strong></strong> (1), it constantly monitors <a href="https://github.com/mmohamed/s2i-ubuntu-php72-apache24.git">the repository</a> of the image on GitHub, and as soon as there is a change, a <strong><strong>Build </strong></strong>is launched in DockerHub and the resulting image is pushed to the <strong><strong>Registry</strong></strong>. In this way, with each new <strong><strong>Build </strong></strong>of our project in Openshift, the new image will be used (2).</li><li>The second Pipeline is for our application; our <strong><strong>Jenkins Job</strong></strong> is the orchestrator: as soon as it detects a change in the <a href="https://github.com/mmohamed/demo.git">repository of the sources</a> of our demo application (5), it launches a <strong><strong>new Build</strong></strong>, starting with a "<strong><strong>Scale down</strong></strong>" of the <strong><strong>Memcached DeployConfig</strong></strong> (because CPU is limited in the project resources), followed by a "<strong><strong>Build webapp</strong></strong>", then a deployment "<strong><strong>Deploy webapp</strong></strong>". Then we <strong><strong>Tag </strong></strong>the resulting image of the Build (in the <strong><strong>Openshift Registry</strong></strong>) before verifying the deployment. 
Finally, we finish our Job with a "<strong><strong>Scale up</strong></strong>" of the <strong><strong>Memcached DeployConfig</strong></strong>.</li></ul><p><br><br>The functioning of our environment remains simple: when an <strong><strong>HTTP request from the client</strong></strong> (3) comes in, it is dispatched by the <strong><strong>Router to the webapp service</strong></strong> and therefore to the application Pod on port 8080 (4). The application needs to communicate with the <strong><strong>Memcached server</strong></strong> (6) and must go through the services, as it does to communicate with the <strong><strong>database Pod</strong></strong> (7).<br><br><br><br></p><blockquote><em>Here is an environment that can be used perfectly to develop with and experience Openshift freely.</em><br><em>Personally, I fell in love with this solution and this wonderful compilation of tools. I hope you will start to love Openshift too.</em></blockquote><p><br><em>NOTE: This post is the result of my experience with Openshift and my own understanding. If you notice omissions or errors, do not hesitate to report them to me with a nice comment. </em><br><br><br><br><em>¹: Red Hat definition - ²: I will do another post about S2I</em></p>]]></content:encoded></item></channel></rss>