Showing posts from 2020

Coming in Apache Karaf 4.3.1: features JSON

The next Apache Karaf release (4.3.1) will support features definitions in JSON format. Up to now, only XML was supported; it's now possible to use either the XML or the JSON format. For instance, the following features repository using the XML format:

<features name="my-features-repo" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
    <feature name="my-feature" version="1.0-SNAPSHOT">
        <feature>scheduler</feature>
        <bundle>mvn:org.example.bundle/my-bundle/1.0-SNAPSHOT</bundle>
    </feature>
</features>

can now be defined using the JSON format:

{
  "name": "my-features-repo",
  "feature": [
    {
      "name": "my-feature",
      "version": "1.0-SNAPSHOT",
      "feature": [
        { "name": "scheduler" }
      ],
      "bundle": [
        { "location": "mvn:org.example.bundle/my-bundle/1.0-SNAPSHOT" }
      ]
    }
  ]
}
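Once such a JSON repository is available, it can be registered and used like any XML one. A minimal shell sketch (the file location is illustrative; Karaf 4.3.1+ assumed):

karaf@root()> feature:repo-add file:/path/to/my-features-repo.json
karaf@root()> feature:install my-feature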

What's new in Apache Karaf 4.3.0

Apache Karaf 4.3.0 is an important milestone for the project and starts the new main release cycle. This release brings a lot of improvements, fixes, and new features.

Winegrower

This is not directly related to Apache Karaf 4.3.0, and I will write a blog post specifically about Winegrower. However, Winegrower is a great new asset in the Karaf ecosystem. It provides a very elegant programming model with a simple/flat classloader model. It's a concrete alternative to Spring Boot: full support of the OSGi R7 annotations and programming model, without the classloader complexity. It means that you can use your existing bundles in Winegrower. As Winegrower uses a unique classloader, you can directly use pure jars (not necessarily bundles) without any MANIFEST headers. The cepages are the equivalent of Spring Boot starters, allowing you to add turnkey features (extensions) that you can use in your application. I will publish a blog post about Winegrower soon.

BoM

To simplify the dependencies management when you create your runtime…
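As a minimal sketch of what using such a BoM typically looks like in a pom.xml (the karaf-bom artifact coordinates are an assumption, to be checked against the 4.3.0 release):

<dependencyManagement>
  <dependencies>
    <!-- import the Karaf BoM so individual Karaf dependencies no longer
         need explicit versions (artifactId assumed) -->
    <dependency>
      <groupId>org.apache.karaf</groupId>
      <artifactId>karaf-bom</artifactId>
      <version>4.3.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>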

Testing custom Apache Karaf distributions

Apache Karaf provides KarafTestSupport to easily implement tests on the Karaf runtime. Karaf itself uses it to test most of the Karaf services in the build (including the Jenkins CI). This approach doesn't only work for the Karaf "vanilla" distribution, but also for any custom distribution based on Karaf.

Custom distribution is available via Maven URL

To illustrate, I'm implementing a test for Apache Unomi. Apache Unomi is actually a custom Karaf distribution, and it's available on Maven Central. For this blog, I'm going to test the Unomi 1.5.1 release: https://repo1.maven.org/maven2/org/apache/unomi/unomi/1.5.1/ .

Let's start with the pom.xml. It basically contains:

- the Karaf itest common and pax-exam dependencies
- the Unomi distribution we want to test

Here's the pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.…
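To give the idea, here is a minimal, stripped-down sketch of the test class itself, using pax-exam directly (the post builds on KarafTestSupport; the tar.gz packaging of the Unomi distribution is an assumption here):

import java.io.File;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.ops4j.pax.exam.Configuration;
import org.ops4j.pax.exam.Option;
import org.ops4j.pax.exam.junit.PaxExam;
import org.ops4j.pax.exam.spi.reactors.ExamReactorStrategy;
import org.ops4j.pax.exam.spi.reactors.PerClass;

import static org.ops4j.pax.exam.CoreOptions.maven;
import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.karafDistributionConfiguration;

@RunWith(PaxExam.class)
@ExamReactorStrategy(PerClass.class)
public class UnomiStartTest {

    @Configuration
    public Option[] config() {
        return new Option[] {
            // point pax-exam at the custom distribution instead of vanilla Karaf
            karafDistributionConfiguration()
                .frameworkUrl(maven().groupId("org.apache.unomi")
                        .artifactId("unomi").version("1.5.1").type("tar.gz"))
                .name("Apache Unomi")
                .unpackDirectory(new File("target/exam"))
        };
    }

    @Test
    public void distributionStarts() {
        // reaching this point means the custom distribution booted
    }
}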

Apache Karaf and log4j2 appenders

While the Karaf default logging service matches most users' needs, I had questions about how to add new or custom appenders.

Karaf and Pax Logging

The first thing to remember is that Apache Karaf doesn't directly use log4j, slf4j, or any logging framework directly. In order to provide maximum flexibility with a unique configuration, Karaf uses Pax Logging. Pax Logging "abstracts" a bunch of logging frameworks. The purpose is to let developers use the logging framework they want, while the devops don't have to care about it: they only know the "central" and unique pax-logging configuration, which deals with the concrete logging framework. This approach is very convenient and flexible, but it has a minor drawback: adding new "modules" (appenders, layouts, etc.) has to be done at the Pax Logging level. Fortunately, it's easy 😉

Pax Logging extra

On Apache Karaf 4.2.x and 4.3.x, we are using the pax-logging-log4j2 service implementation. Basically, Pax Logging is split in two parts: the Pax Logging API…
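For appenders that log4j2 already ships, no extra module is needed: they are configured in etc/org.ops4j.pax.logging.cfg using the log4j2 properties syntax. A minimal sketch adding a dedicated rolling file appender (appender name, file names, and pattern are illustrative):

# etc/org.ops4j.pax.logging.cfg
log4j2.appender.custom.type = RollingFile
log4j2.appender.custom.name = Custom
log4j2.appender.custom.fileName = ${karaf.data}/log/custom.log
log4j2.appender.custom.filePattern = ${karaf.data}/log/custom-%i.log.gz
log4j2.appender.custom.layout.type = PatternLayout
log4j2.appender.custom.layout.pattern = %d{ISO8601} | %-5p | %m%n
log4j2.appender.custom.policies.type = Policies
log4j2.appender.custom.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.custom.policies.size.size = 8MB

# attach the new appender to the root logger
log4j2.rootLogger.appenderRef.Custom.ref = Custom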

New collectors in Apache Karaf Decanter 2.4.0

Apache Karaf Decanter 2.4.0 will be released soon and includes a set of new collectors.

Oshi Collector

The oshi collector harvests a bunch of data about the hardware and the operating system. It's a scheduled collector, executed periodically (every minute by default). You can get all the details about the machine thanks to this collector: motherboard, CPU, sensors, disks, etc. By default, the oshi collector retrieves all details, but you can filter what you want to harvest in the etc/org.apache.karaf.decanter.collector.oshi.cfg configuration file. It means we now have the system collector, which allows you to periodically execute scripts and shell commands, and the oshi collector, which harvests all the details about the system.

ConfigAdmin Collector

The ConfigAdmin collector is an event-driven collector. It "listens" for any change on the Karaf configuration and sends an event for each change.

Prometheus Collector

Karaf Decanter 2.3.0 introduced the Prometheus appender to expose metrics on a Prometheus…
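To try one of these collectors, a minimal shell sketch (the decanter features repository alias ships with Karaf; the decanter-collector-oshi feature name is assumed from Decanter's naming convention):

karaf@root()> feature:repo-add decanter 2.4.0
karaf@root()> feature:install decanter-collector-oshi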

Apache Karaf Decanter 2.4.0, new processing layer

Up to Apache Karaf Decanter 2.3.0, the collection workflow was pretty simple: collect and append. In Karaf Decanter 2.4.0, we introduced a new optional layer in between: processing. It means that the workflow can now be collect, process, and append. A processor gets data from the collectors and applies some processing logic before sending the event into the Decanter dispatcher, destined for the appenders. The purpose is to be able to apply any kind of processing before storing/sending the collected data. To use and enable this workflow, you just have to install a processor and change the appenders to listen for data coming from the processor (see the sketch below).

Example of aggregate processor

A first processor is available in Karaf Decanter 2.4.0: the timed aggregator. By default, each piece of data collected by the collectors is sent directly to the appenders. For instance, it means that the JMX collectors will send one event per MBean every minute by default. If the appender used is a REST appender, it means that we will call the…
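As a minimal sketch of the rewiring (the decanter-processor-aggregate feature name and the decanter/process/* destination topic follow Decanter's conventions but are assumptions; check the documentation for your version):

karaf@root()> feature:install decanter-processor-aggregate

# then point an appender at the processor output instead of the collectors,
# e.g. in etc/org.apache.karaf.decanter.appender.rest.cfg:
event.topics=decanter/process/*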

Apache CXF metrics with Apache Karaf Decanter

Recently, I had the same question several times: how can I get metrics (number of requests, request time, …) for the SOAP and REST services deployed in Apache Karaf or Apache Unomi (which also runs on Karaf)? SOAP and REST services are often implemented with Apache CXF (either using CXF directly, or using the Aries JAX-RS whiteboard, which uses CXF under the hood). Apache Karaf provides examples of how to deploy SOAP/REST services, using different approaches (depending on the one you prefer):

https://github.com/apache/karaf/tree/master/examples/karaf-soap-example
https://github.com/apache/karaf/tree/master/examples/karaf-rest-example

CXF Bus Metrics feature

Apache CXF provides a metrics feature that collects the metrics we need. Under the hood, it uses the Dropwizard library, and the metrics are exposed as JMX MBeans thanks to the JmxExporter. Let's take a simple REST service. For this example, I'm using Blueprint, but it also works with CXF programmatically or using SCR. I have a very simple JAX-RS class looking…
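A minimal Blueprint sketch wiring CXF's MetricsFeature into a JAX-RS server (org.example.ExampleRestService is hypothetical, and the exact metrics provider wiring may differ from the post's full example):

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs">

  <!-- CXF metrics feature (from cxf-rt-features-metrics) -->
  <bean id="metricsFeature" class="org.apache.cxf.metrics.MetricsFeature"/>

  <!-- hypothetical service bean -->
  <bean id="exampleService" class="org.example.ExampleRestService"/>

  <jaxrs:server address="/example">
    <jaxrs:serviceBeans>
      <ref component-id="exampleService"/>
    </jaxrs:serviceBeans>
    <jaxrs:features>
      <ref component-id="metricsFeature"/>
    </jaxrs:features>
  </jaxrs:server>

</blueprint>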

Apache Karaf Decanter 2.3.0, new Prometheus appender

As said in my previous post, Apache Karaf Decanter 2.3.0 is a major new release bringing fixes, improvements, and new features. We already saw the new alerting service. In this blog post, we look at another new feature: the Prometheus appender.

Prometheus?

Prometheus ( https://prometheus.io/ ) is a popular metrics toolkit, especially in the cloud ecosystem. It's open source and part of the Cloud Native Computing Foundation. As Karaf Decanter provides similar collecting and alerting features, it makes sense to use Decanter as a collector that Prometheus can request. The visualization and search can then be performed in Prometheus.

Decanter Prometheus Appender

The preferred approach with Prometheus is to "expose" an HTTP endpoint providing metrics that the Prometheus platform can retrieve. That's what the Decanter Prometheus appender does:

- it binds a Prometheus servlet that Prometheus can "poll"
- it gets the incoming data from the Decanter collectors
- it detects the numbers in the event data
- it creates Prometheus…
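A minimal sketch to enable it (both the feature name and the servlet alias are assumptions; check the Decanter documentation):

karaf@root()> feature:install decanter-appender-prometheus

# then point Prometheus at the exposed endpoint, e.g. (alias assumed):
$ curl http://localhost:8181/decanter/prometheus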

Apache Karaf Decanter 2.3.0, the new alerting service

Apache Karaf Decanter 2.3.0 will be released soon. This release brings a lot of fixes, improvements, and new features. In this blog post, we will focus on one major refactoring done in this version: the alerting service.

Goodbye checker, welcome alerting service

Before Karaf Decanter 2.3.0, the alert rules were defined in a configuration file named etc/org.apache.karaf.decanter.alerting.checker.cfg. The configuration was simple, for instance:

message.warn=match:.*foobar.*

But the checker has three limitations:

- it's not possible to define a check on several attributes at the same time. For instance, it's not possible to have a rule like if message == 'foo' and other == 'bar'.
- it's not possible to have "time scoped" rules. For instance, I want an alert only if a counter is greater than a given value for x minutes.
- a bit related to the previous point, recoverable alerts are not perfect in the checker. It should be a configuration of the alert rule…

Apache ActiveMQ 5.15.12 performance improvement on JDBC persistence adapter

Some weeks ago, I identified a performance issue in ActiveMQ (affecting releases up to 5.15.11) when using PostgreSQL as the JDBC persistence adapter. The JDBC persistence adapter is an alternative to KahaDB, configured in activemq.xml as follows:

<broker ...>
  ...
  <persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#postgres-ds"/>
  </persistenceAdapter>
  ...
</broker>
...
<bean id="postgres-ds" class="org.postgresql.ds.PGPoolingDataSource">
  <property name="url" value="jdbc:postgresql://192.168.99.100:5432/activemq"/>
  <property name="user" value="activemq"/>
  <property name="password" value="activemq"/>
  <property name="initialConnections" value="1"/>
  <property name="maxConnections" value="10"/>
</bean>
...

The problem occurs when we have a lot of pending messages…