Showing posts from 2018

Apache Karaf on Azure cloud

In a previous post, I showed the “new” Docker tooling. In this blog post, we will use a Karaf Docker image on the Azure cloud.

Creating our Karaf Docker image

For this post, we will start from a Karaf instance where we install the Karaf REST example. So, on a running Karaf instance, we change etc/org.apache.karaf.features.cfg to add the REST example to featuresBoot:

...
featuresRepositories = \
    mvn:org.apache.karaf.features/enterprise/4.2.1/xml/features, \
    mvn:org.apache.karaf.features/spring/4.2.1/xml/features, \
    mvn:org.apache.karaf.features/standard/4.2.1/xml/features, \
    mvn:org.apache.karaf.features/framework/4.2.1/xml/features, \
    mvn:org.apache.karaf.examples/karaf-rest-example-features/4.2.1/xml
...
featuresBoot = \
    instance/4.2.1, \
    package/4.2.1, \
    log/4.2.1, \
    ssh/4.2.1, \
    framework/4.2.1, \
    system/4.2.1, \
    eventadmin/4.2.1, \
    feature/4.2.1, \
    shell/4.2.1, \
    management/4.2.1, \
    service/4.2.1, \
    jaas/
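To give an idea of how such an image could be assembled, here is a minimal Dockerfile sketch. The base image, paths, and exposed ports are illustrative assumptions for this post, not the official Karaf Docker tooling:

```
# Illustrative sketch: package a locally prepared Karaf distribution
FROM openjdk:8-jre
# copy the Karaf distribution prepared above (path is an assumption)
COPY apache-karaf-4.2.1 /opt/apache-karaf
# SSH console and HTTP service ports
EXPOSE 8101 8181
# run Karaf in the foreground, as expected by Docker
CMD ["/opt/apache-karaf/bin/karaf", "run"]
```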

Apache Karaf & Docker

Apache Karaf 4.2.1 was released two weeks ago. It’s a major upgrade in the Karaf 4.2.x series, bringing lots of fixes, improvements and new features. Especially:

- better support of Java 9, 10 & 11
- sets of examples directly available in the Karaf distribution, allowing you to easily start with Karaf
- a better KarafTestSupport, allowing you to easily create your own integration tests

One of the interesting new features added in Apache Karaf 4.2.1 is the support of Docker. Docker is a great system container platform. Mixing Docker (system container) and Apache Karaf (application container) together gives great flexibility and a very powerful approach for your applications and ecosystem. You decide on the provisioning approach you want to adopt:

- a static approach, using a Karaf static profile directly running in Docker
- a dynamic approach, with a regular Karaf distribution running in Docker

Apache Karaf 4.2.1 brings two tools around Docker: build tooling (scripts) allows you to easily create

New Karaf HTTP proxy feature

Apache Karaf 4.2.0 is now under vote, bringing a lot of improvements and new features. One of these new features is the HTTP proxy. The idea of the Karaf HTTP proxy feature is to be able to “expose” a non-Karaf HTTP application in the Karaf web container. For example, you have a legacy HTTP application running standalone. This application is bound to http://localhost:9999/foo. For consistency, you want Karaf as your main HTTP web container, acting as a gateway/proxy to any other application. The Karaf HTTP proxy feature is for you.

Part of the HTTP feature

You may know the http feature, installing the Karaf web container. The new proxy feature is part of the http feature. So, you just have to install:

karaf@root()> feature:install http

You can now see three commands available in addition to the http:list one:

http:proxies
http:proxy-add
http:proxy-remove

To illustrate the usage of the HTTP proxy, let’s take the example of the Karaf WebConsole.

Example: proxying Karaf WebConsole
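As a sketch of how the legacy application from the example above could be proxied, the console session might look like this. The alias and target URL are taken from the example; the exact argument order of http:proxy-add may differ in your version, so check `http:proxy-add --help`:

```
karaf@root()> http:proxy-add /foo http://localhost:9999/foo
karaf@root()> http:proxies
```

After that, requests hitting /foo on the Karaf web container are forwarded to the standalone application.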

Building Angular WebBundle for Apache Karaf

Apache Karaf is a complete application container, supporting several programming models: OSGi, DS, Blueprint, Spring, … It’s also a complete web application container, like Apache Tomcat, but providing some unique features. Apache Karaf supports WAR files, and it also supports Web Bundles. A Web Bundle is basically a bundle with a specific header. For the web application frontend, Angular is a popular framework (used in combination with Bootstrap). It’s possible to write Angular directly by hand, but, most of the time, web developers prefer to use an IDE like WebStorm. In any case, Angular CLI is the “classic” tool to test and build your Angular application.

Project with Angular CLI

Angular CLI allows you to quickly start your Angular project. You can bootstrap using the following command:

$ ng new test-frontend

Then, Angular CLI (ng) creates all required resources:

create test-frontend/ (1028 bytes)
create test-frontend/.angular-cli.json (1248 bytes)
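As a reminder of the “specific header” mentioned above: what turns a plain bundle into a Web Bundle is the Web-ContextPath manifest header from the OSGi Web Applications specification. A minimal MANIFEST.MF could look like this (symbolic name, version, and context path are illustrative values for this project):

```
Manifest-Version: 1.0
Bundle-SymbolicName: test-frontend
Bundle-Version: 1.0.0
Web-ContextPath: /test-frontend
```

With such a header, the Karaf web container registers the bundle’s static resources under /test-frontend.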

Implement simple persistent redelivery with backoff mixing Apache Camel & ActiveMQ

When you use Apache Camel routes for your integration and a failure occurs, a classic pattern is to retry the message. That’s especially true for recoverable errors: for instance, if you have a network outage, you can replay the messages, hoping the network will recover. In Apache Camel, this redelivery policy is configured in the error handler. The default and dead letter error handlers support such a policy. However, by default, the redelivered exchange is stored in memory, and new exchanges do not come through, since the first redelivered exchange with the exception is not flagged as “handled”. This approach can be an issue: if you restart the container hosting the Camel route (like Apache Karaf), the exchange is lost. Moreover, in terms of performance, we might want to keep exchanges going through. There are several solutions to achieve this. In this blog, I will illustrate a possible implementation of a persistent redelivery policy with backoff support. Apa
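Before looking at persistence, it helps to recall how an exponential backoff schedule behaves. The following self-contained sketch mimics the delay sequence you would configure on a Camel error handler with redeliveryDelay, backOffMultiplier, and maximumRedeliveryDelay; the class and method names are mine, for illustration only:

```java
// Illustrative helper: computes the delay before redelivery attempt N,
// following an exponential backoff policy (initialDelay * multiplier^(attempt-1)),
// capped by a maximum delay, as a Camel redelivery policy would do.
public class BackoffSchedule {

    private final long initialDelay;
    private final double multiplier;
    private final long maximumDelay;

    public BackoffSchedule(long initialDelay, double multiplier, long maximumDelay) {
        this.initialDelay = initialDelay;
        this.multiplier = multiplier;
        this.maximumDelay = maximumDelay;
    }

    // attempt is 1-based: the first redelivery waits the initial delay
    public long delayFor(int attempt) {
        double delay = initialDelay * Math.pow(multiplier, attempt - 1);
        return Math.min((long) delay, maximumDelay);
    }

    public static void main(String[] args) {
        BackoffSchedule schedule = new BackoffSchedule(1000, 2.0, 60000);
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.println("attempt " + attempt + " -> " + schedule.delayFor(attempt) + " ms");
        }
    }
}
```

With an initial delay of 1 second and a multiplier of 2, the attempts wait 1s, 2s, 4s, 8s, 16s, … until the cap is reached. A persistent implementation has to survive a container restart, which is exactly what an in-memory policy cannot do.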

Apache Beam: easily implement backoff policy in your DoFn

In Apache Beam, DoFn is your Swiss Army knife: when you don’t have an existing PTransform or composite transform provided by the SDK, you can create your own function.

DoFn?

A DoFn applies your logic to each element in the input PCollection and lets you populate the elements of an output PCollection. To be included in your pipeline, it’s wrapped in a ParDo PTransform. For instance, you can transform elements using a DoFn:

pipeline.apply("ReadFromJms", ...)
    .apply("TransformJmsRecordAsPojo", ParDo.of(new DoFn<JmsRecord, MyCityPojo>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String payload = c.element().getPayload();
            MyCityPojo city = new MyCityPojo(payload);
            c.output(city);
        }
    }));

We can see here the core method of DoFn: processElement
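The backoff idea from the title can be sketched independently of the Beam SDK as a retry loop wrapped around the per-element logic; inside a @ProcessElement method you would call a helper like the one below. The names and the sleep-based approach are illustrative assumptions, not Beam API (Beam itself also ships backoff utilities in the SDK’s util package):

```java
import java.util.concurrent.Callable;

// Illustrative retry helper: invokes an action and, on failure, retries with
// an exponentially growing delay, up to a maximum number of attempts.
public class RetryWithBackoff {

    public static <T> T call(Callable<T> action, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last; // all attempts failed: rethrow the last exception
    }
}
```

Inside processElement, a flaky call (a JMS lookup, an HTTP request, …) would then be wrapped as `RetryWithBackoff.call(() -> flakyCall(c.element()), 5, 100)`.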