Using Apache Karaf with Kubernetes

In a previous blog post (http://blog.nanthrax.net/?p=893), I introduced the “new” Docker support in Karaf and the use of the Karaf static distribution.

This post follows the previous one by illustrating how to use the Karaf Docker image with a Kubernetes cluster.

Preparing the Karaf static Docker image

As in the previous blog post, we prepare a Docker image based on a Karaf static distribution.

For this blog post, I’m using the example provided in Karaf: https://github.com/apache/karaf/tree/master/examples/karaf-docker-example/karaf-docker-example-static-dist.

I’m just building the distribution as usual:

karaf-docker-example-static-dist$ mvn clean install

Preparing a Kubernetes testing cluster

For this post, I’m using minikube installed locally.

The installation is pretty simple:

  1. Download minikube from https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  2. Rename minikube-linux-amd64 to minikube and make it executable
  3. Copy minikube into your path, for instance /usr/local/bin (the equivalent shell commands are shown below)
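In shell form, that’s roughly (assuming curl is available; the chmod step is easy to forget):

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube
$ sudo cp minikube /usr/local/bin/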

Now that minikube is installed on the machine, we can initialize the testing Kubernetes cluster:

$ minikube start
minikube v1.2.0 on linux (amd64)
Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
Pulling images ...
Launching Kubernetes ...
Verifying: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"

Now, we can install the kubectl command line tool on our machine:

$ apt install kubectl

We can now interact with our Kubernetes cluster:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m49s   v1.15.0

Now, we will use the Docker daemon running inside minikube to build our image, so it is directly available to the cluster.

To do so, we retrieve the Docker daemon location using minikube and set the corresponding environment variables:

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/jbonofre/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)

$ eval $(minikube docker-env)
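From this point, any docker command in this shell targets the minikube daemon. A quick sanity check is to list the running containers, which should now be the Kubernetes system containers rather than your local ones:

$ docker ps --format '{{.Names}}'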

It’s possible to access the Kubernetes cluster master using minikube ssh:

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$

It’s also possible to use the Kubernetes web console dashboard:

$ minikube dashboard
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
Opening http://127.0.0.1:45199/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...
[4552:4574:0625/142804.920361:ERROR:browser_process_sub_thread.cc(221)] Waited 3 ms for network service
Opening in existing browser session.

In this post, I’m only using the kubectl command line tool, but you can achieve exactly the same using the Kubernetes web dashboard.

Preparing the Docker image

We built the karaf-docker-example-static-dist distribution previously.

Thanks to the karaf-maven-plugin, the karaf:dockerfile goal automatically creates a ready-to-use Dockerfile. Going into the karaf-docker-example-static-dist/target folder, where the Dockerfile is located, we can directly create the Docker image in the minikube Docker daemon:

$ cd karaf-docker-example-static-dist/target
$ docker build -t karaf .
Sending build context to Docker daemon  58.12MB
Step 1/7 : FROM openjdk:8-jre
8-jre: Pulling from library/openjdk
6f2f362378c5: Pull complete
494c27a8a6b8: Pull complete
7596bb83081b: Pull complete
1e739bce2743: Pull complete
4dde2a90460d: Pull complete
1f5b8585072c: Pull complete
Digest: sha256:ab3c95c9b20a238a2e62201104d54f887da6e231ba1ff1330fae5a29d5b99f5f
Status: Downloaded newer image for openjdk:8-jre
 ---> ad64853179c1
Step 2/7 : ENV KARAF_INSTALL_PATH /opt
 ---> Running in 77defab3df79
Removing intermediate container 77defab3df79
 ---> 2abcb151c984
Step 3/7 : ENV KARAF_HOME $KARAF_INSTALL_PATH/apache-karaf
 ---> Running in 90bb0624c34d
Removing intermediate container 90bb0624c34d
 ---> 1c9da3faa250
Step 4/7 : ENV PATH $PATH:$KARAF_HOME/bin
 ---> Running in 152c96b787f3
Removing intermediate container 152c96b787f3
 ---> 7574018f8973
Step 5/7 : COPY assembly $KARAF_HOME
 ---> 24c8710f2601
Step 6/7 : EXPOSE 8101 1099 44444 8181
 ---> Running in 280795e8e7b6
Removing intermediate container 280795e8e7b6
 ---> 557edaa00ad9
Step 7/7 : CMD ["karaf", "run"]
 ---> Running in eeed1d42ee76
Removing intermediate container eeed1d42ee76
 ---> 945b344ccf12
Successfully built 945b344ccf12
Successfully tagged karaf:latest
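For reference, the seven build steps above correspond to a generated Dockerfile along these lines (reconstructed here from the build output):

FROM openjdk:8-jre
ENV KARAF_INSTALL_PATH /opt
ENV KARAF_HOME $KARAF_INSTALL_PATH/apache-karaf
ENV PATH $PATH:$KARAF_HOME/bin
COPY assembly $KARAF_HOME
EXPOSE 8101 1099 44444 8181
CMD ["karaf", "run"]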

We now have a docker image ready in the minikube docker daemon:

$ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
karaf                                     latest              945b344ccf12        49 seconds ago      267MB
k8s.gcr.io/kube-proxy                     v1.15.0             d235b23c3570        5 days ago          82.4MB
k8s.gcr.io/kube-apiserver                 v1.15.0             201c7a840312        5 days ago          207MB
k8s.gcr.io/kube-scheduler                 v1.15.0             2d3813851e87        5 days ago          81.1MB
k8s.gcr.io/kube-controller-manager        v1.15.0             8328bb49b652        5 days ago          159MB
openjdk                                   8-jre               ad64853179c1        2 weeks ago         246MB
k8s.gcr.io/kube-addon-manager             v9.0                119701e77cbc        5 months ago        83.1MB
k8s.gcr.io/coredns                        1.3.1               eb516548c180        5 months ago        40.3MB
k8s.gcr.io/kubernetes-dashboard-amd64     v1.10.1             f9aed6605b81        6 months ago        122MB
k8s.gcr.io/etcd                           3.3.10              2c4adeb21b4f        6 months ago        258MB
k8s.gcr.io/k8s-dns-sidecar-amd64          1.14.13             4b2e93f0133d        9 months ago        42.9MB
k8s.gcr.io/k8s-dns-kube-dns-amd64         1.14.13             55a3c5209c5e        9 months ago        51.2MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64    1.14.13             6dc8ef8287d3        9 months ago        41.4MB
k8s.gcr.io/pause                          3.1                 da86e6ba6ca1        18 months ago       742kB
gcr.io/k8s-minikube/storage-provisioner   v1.8.1              4689081edb10        19 months ago       80.8MB

As we don’t want Kubernetes to try pulling a newer image on each deployment (the default behavior for the latest tag), we tag our karaf image with a version (1.0.0 in our case):

$ docker tag karaf:latest karaf:1.0.0
$ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
karaf                                     1.0.0               945b344ccf12        4 minutes ago       267MB
karaf                                     latest              945b344ccf12        4 minutes ago       267MB
...

Creating a deployment and pod in Kubernetes

We can now create a pod with our karaf image in the Kubernetes cluster, using the kubectl run command:

$ kubectl run karaf --image=karaf:1.0.0 --port=8181
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/karaf created

Note that we declare the port number (8181) where the example servlet is bound in our Karaf distribution.
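As the output above warns, the kubectl run generators are deprecated. A declarative sketch of an equivalent Deployment manifest (assuming the run=karaf label that kubectl run generates; the imagePullPolicy makes explicit that we want the image already present in the minikube daemon) would be:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karaf
spec:
  replicas: 1
  selector:
    matchLabels:
      run: karaf
  template:
    metadata:
      labels:
        run: karaf
    spec:
      containers:
      - name: karaf
        image: karaf:1.0.0
        imagePullPolicy: IfNotPresent  # use the locally built image, don't pull
        ports:
        - containerPort: 8181
EOF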

We can verify the status of our deployment and associated pods:

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
karaf   1/1     1            1           44s

$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
karaf-cc9c6bd5d-6wbzk   1/1     Running   0          57s   172.17.0.5   minikube   <none>           <none>

We can see our karaf pod running on the minikube node.

We can get more details about our pod:

$ kubectl describe pods
Name:           karaf-cc9c6bd5d-6wbzk
Namespace:      default
Priority:       0
Node:           minikube/10.0.2.15
Start Time:     Tue, 25 Jun 2019 14:52:28 +0200
Labels:         pod-template-hash=cc9c6bd5d
                run=karaf
Annotations:    <none>
Status:         Running
IP:             172.17.0.5
Controlled By:  ReplicaSet/karaf-cc9c6bd5d
Containers:
  karaf:
    Container ID:   docker://0e540e099d88245b690fec69ebaa486ea5faa91813497ed6f9bc2b70af8188bd
    Image:          karaf:1.0.0
    Image ID:       docker://sha256:945b344ccf12b7a1edf9319d3db1041f1d90b62a65e07f0da5f402da17aef5dd
    Port:           8181/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 25 Jun 2019 14:52:29 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bl7ks (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-bl7ks:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bl7ks
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned default/karaf-cc9c6bd5d-6wbzk to minikube
  Normal  Pulled     12m   kubelet, minikube  Container image "karaf:1.0.0" already present on machine
  Normal  Created    12m   kubelet, minikube  Created container karaf
  Normal  Started    12m   kubelet, minikube  Started container karaf

It’s also possible to directly see the pod logs:

$ kubectl logs karaf-cc9c6bd5d-6wbzk
karaf: Ignoring predefined value for KARAF_HOME
Jun 25, 2019 12:52:29 PM org.apache.karaf.main.Main launch
INFO: Installing and starting initial bundles
Jun 25, 2019 12:52:29 PM org.apache.karaf.main.Main launch
INFO: All initial bundles installed and set to start
Jun 25, 2019 12:52:29 PM org.apache.karaf.main.Main$KarafLockCallback lockAcquired
INFO: Lock acquired. Setting startlevel to 100
12:52:30.814 INFO  [FelixStartLevel] Logging initialized @1571ms to org.eclipse.jetty.util.log.Slf4jLog
12:52:30.837 INFO  [FelixStartLevel] EventAdmin support is not available, no servlet events will be posted!
12:52:30.839 INFO  [FelixStartLevel] LogService support enabled, log events will be created.
12:52:30.842 INFO  [FelixStartLevel] Pax Web started
12:52:31.151 INFO  [paxweb-config-1-thread-1] No ALPN class available
12:52:31.152 INFO  [paxweb-config-1-thread-1] HTTP/2 not available, creating standard ServerConnector for Http
12:52:31.189 INFO  [paxweb-config-1-thread-1] Pax Web available at [0.0.0.0]:[8181]
12:52:31.201 INFO  [paxweb-config-1-thread-1] Binding bundle: [org.apache.karaf.http.core [15]] to http service
12:52:31.217 INFO  [paxweb-config-1-thread-1] Binding bundle: [org.ops4j.pax.web.pax-web-extender-whiteboard [47]] to http service
12:52:31.222 INFO  [paxweb-config-1-thread-1] Binding bundle: [org.apache.karaf.examples.karaf-docker-example-app [14]] to http service
12:52:31.251 INFO  [paxweb-config-1-thread-1] will add org.apache.jasper.servlet.JasperInitializer to ServletContainerInitializers
12:52:31.251 INFO  [paxweb-config-1-thread-1] Skipt org.apache.jasper.servlet.JasperInitializer, because specialized handler will be present
12:52:31.252 INFO  [paxweb-config-1-thread-1] will add org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer to ServletContainerInitializers
12:52:31.322 INFO  [paxweb-config-1-thread-1] added ServletContainerInitializer: org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer
12:52:31.323 INFO  [paxweb-config-1-thread-1] will add org.eclipse.jetty.websocket.server.NativeWebSocketServletContainerInitializer to ServletContainerInitializers
12:52:31.323 INFO  [paxweb-config-1-thread-1] added ServletContainerInitializer: org.eclipse.jetty.websocket.server.NativeWebSocketServletContainerInitializer
12:52:31.366 INFO  [paxweb-config-1-thread-1] registering context DefaultHttpContext [bundle=org.apache.karaf.examples.karaf-docker-example-app [14], contextID=default], with context-name: 
12:52:31.387 INFO  [paxweb-config-1-thread-1] registering JasperInitializer
12:52:31.446 INFO  [paxweb-config-1-thread-1] No DecoratedObjectFactory provided, using new org.eclipse.jetty.util.DecoratedObjectFactory[decorators=1]
12:52:31.559 INFO  [paxweb-config-1-thread-1] DefaultSessionIdManager workerName=node0
12:52:31.559 INFO  [paxweb-config-1-thread-1] No SessionScavenger set, using defaults
12:52:31.562 INFO  [paxweb-config-1-thread-1] node0 Scavenging every 600000ms
12:52:31.580 INFO  [paxweb-config-1-thread-1] Started HttpServiceContext{httpContext=DefaultHttpContext [bundle=org.apache.karaf.examples.karaf-docker-example-app [14], contextID=default]}
12:52:31.587 INFO  [paxweb-config-1-thread-1] jetty-9.4.18.v20190429; built: 2019-04-29T20:42:08.989Z; git: e1bc35120a6617ee3df052294e433f3a25ce7097; jvm 1.8.0_212-b04
12:52:31.631 INFO  [paxweb-config-1-thread-1] Started default@4438a8f7{HTTP/1.1,[http/1.1]}{0.0.0.0:8181}
12:52:31.632 INFO  [paxweb-config-1-thread-1] Started @2393ms

Exposing the service

Now, we want to expose the servlet example running in Karaf so that it’s accessible from “outside” the Kubernetes cluster.

We expose our karaf deployment as a service on port 8181:

$ kubectl expose deployment/karaf --type="NodePort" --port=8181
service/karaf exposed
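For reference, a declarative sketch of the equivalent NodePort Service manifest (the node port itself is auto-assigned, as in the output below):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: karaf
spec:
  type: NodePort
  selector:
    run: karaf
  ports:
  - port: 8181
    targetPort: 8181
EOF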

We now have a service available, directly accessible:

$ kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
karaf        NodePort    10.104.119.216   <none>        8181:31673/TCP   56s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          52m

We can see the karaf service.

Now, we can get the “external/public” URL exposed by minikube:

$ minikube service --url karaf
http://192.168.99.100:31673

This means we can point our browser to http://192.168.99.100:31673/servlet-example to reach the example servlet.
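For example, a quick check with curl (using the URL returned by minikube service above; the node port will differ on your setup):

$ curl http://192.168.99.100:31673/servlet-example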

If we display the pod logs, we can see the client access:

$ kubectl logs karaf-cc9c6bd5d-6wbzk
karaf: Ignoring predefined value for KARAF_HOME
...
12:52:31.631 INFO  [paxweb-config-1-thread-1] Started default@4438a8f7{HTTP/1.1,[http/1.1]}{0.0.0.0:8181}
12:52:31.632 INFO  [paxweb-config-1-thread-1] Started @2393ms
13:13:52.506 INFO  [qtp1230440715-30] Client 172.17.0.1 request received on http://192.168.99.100:31673/servlet-example

Note the last log message: it shows the client request reaching our servlet.

Scaling

Using a concrete Kubernetes cluster

minikube is great for local testing, but it’s a “limited” Kubernetes cluster (for instance, it’s not so easy to add new nodes).

Docker registry

In order to share my Docker image with the nodes of my Kubernetes cluster, I create a local Docker registry:

$ docker pull registry:2
$ docker run -d -p 5000:5000 --name registry registry:2

Then, I tag my Karaf Docker image and push it to my fresh registry:

$ docker tag karaf:1.0.0 localhost:5000/karaf:1.0.0
$ docker push localhost:5000/karaf:1.0.0
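To verify the push, we can query the registry HTTP API, which should list the karaf repository:

$ curl http://localhost:5000/v2/_catalog
{"repositories":["karaf"]}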

VirtualBox VMs with Kubernetes

To illustrate a more concrete Kubernetes cluster, we will create a couple of Ubuntu VMs in VirtualBox.
The only setup that needs care is the network. To keep things simple, I’m using a “Bridged Network Adapter”, meaning the VMs appear on my local network like any other machine.

I install a regular Ubuntu OS on each VM.

On each machine, I install some tools that I’m going to use:

$ sudo apt install apt-transport-https curl docker.io

As we are going to use our new insecure Docker registry, I declare it in /etc/docker/daemon.json on each machine, using the IP address of the machine hosting the registry:

{
  "insecure-registries": ["192.168.134.110:5000"]
}
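After changing daemon.json, the Docker daemon has to be restarted so the setting is taken into account:

$ sudo systemctl restart docker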

Then, I install Kubernetes on the VMs:

$ sudo su -
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt update
# apt install kubelet kubeadm kubectl

Starting cluster master

Now, we can create the Kubernetes cluster master on the first VM:

$ sudo su -
# kubeadm init
...
kubeadm join 192.168.134.90:6443 --token alpia1.8cjc1yfv5ezganq7 --discovery-token-ca-cert-hash sha256:3f2da2fa1967b8e974b9097fcdd15c66e0d136db5b1f08b3db7fe45c3e2b790b

With my regular user, I initialize the kubectl configuration:

$ mkdir .kube
$ cd .kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown jbonofre:jbonofre ~/.kube/config

By default, Kubernetes doesn’t schedule pods on the master node. As I also want to use the master as a worker, I remove the master taint:

$ kubectl taint nodes --all node-role.kubernetes.io/master-
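We can verify the taint is gone (the node, simply named node here, should report no taints):

$ kubectl describe node node | grep Taints
Taints:             <none>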

We have one node ready:

$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
node   Ready    master   2m    v1.15.0

Starting cluster worker

On the second VM, we join the Kubernetes cluster with the command provided by kubeadm init:

$ sudo su -
# kubeadm join 192.168.134.90:6443 --token alpia1.8cjc1yfv5ezganq7 --discovery-token-ca-cert-hash sha256:3f2da2fa1967b8e974b9097fcdd15c66e0d136db5b1f08b3db7fe45c3e2b790b

We now have two nodes:

$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node2   Ready    <none>   2m    v1.15.0
node    Ready    master   5m    v1.15.0

We can now scale our deployment 😉

Deployment and scaling

Let’s deploy our Karaf as we did before, this time referencing the image in our registry:

$ kubectl run karaf --image=192.168.134.110:5000/karaf:1.0.0 --port=8181

Then, we have Karaf running on one node:

$ kubectl get deployments
NAME     READY     UP-TO-DATE      AVAILABLE       AGE
karaf    1/1       1               1               32s

We can see the pod and the node where it’s running:

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE    IP                NODE    NOMINATED NODE   READINESS GATES
karaf-68cb45h   1/1     Running   0          106s   192.168.167.130   node1   <none>           <none>

So Karaf is running on node1. Now, let’s scale the deployment to use the two nodes:

$ kubectl scale deployments/karaf --replicas=2
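Optionally, we can watch the rollout until the new replica is available:

$ kubectl rollout status deployment/karaf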

Now, we can see that our deployment scaled to two replicas:

$ kubectl get deployments
NAME     READY     UP-TO-DATE      AVAILABLE       AGE
karaf    2/2       2               2               4m34s

We can see the pods on the two nodes:

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
karaf-68cb45x   1/1     Running   0          74s     192.168.104.8     node2   <none>           <none>
karaf-68cb45h   1/1     Running   0          5m30s   192.168.167.130   node1   <none>           <none>

It’s also possible to scale down:

$ kubectl scale deployments/karaf --replicas=1
$ kubectl get deployments
NAME     READY     UP-TO-DATE      AVAILABLE       AGE
karaf    1/1       1               1               8m40s

$ kubectl get pods -o wide
NAME            READY   STATUS        RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
karaf-68cb45x   1/1     Terminating   0          4m21s   192.168.104.8     node2   <none>           <none>
karaf-68cb45h   1/1     Running       0          8m45s   192.168.167.130   node1   <none>           <none>

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
karaf-68cb45h   1/1     Running   0          9m50s   192.168.167.130   node1   <none>           <none>

Summary

Following the previous blog post about Karaf with Docker, this post shows that Karaf is fully ready to run on Kubernetes.

As part of the “kloud initiative” (Karaf for the Cloud), I’m preparing some tooling directly in Karaf to further simplify the use of Kubernetes. There are also some improvements coming in Karaf Cellar to better leverage Kubernetes.

In the meantime, you can already use Karaf with Kubernetes in your datacenter or with your cloud provider.

Enjoy!
