What's new in Apache Karaf Decanter 2.8.0?
Apache Karaf Decanter 2.8.0 has just been released. This release includes several fixes, improvements, and dependency updates.
I encourage all Decanter users to update to the 2.8.0 release ;)
In this blog post, I will highlight some of the important changes and fixes we made in this release.
Prometheus appender improvements
The Prometheus appender has been improved to expose more gauges.
As a reminder, the Decanter Prometheus appender is basically a servlet that exposes Prometheus-compliant data, which Prometheus instances can poll to get the latest updated metrics.
The Prometheus appender only looks for numeric data (coming from the Decanter collectors) to create and expose gauges.
Unfortunately, in previous Decanter releases, the Prometheus appender only looked for numeric values in "first level" properties. This means that if a collected data property value was a
Map, the inner data was not considered by the Prometheus appender, even if the inner values were numeric.
That's the first improvement we made in the Prometheus appender: if a property value is a
Map, the appender now goes into the map values, looking for numeric types and creating the corresponding gauges.
Related to that, to avoid "confusion", the gauge name is now built (in the case of a map) from the collected data property key followed by the map entry key.
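To illustrate the new flattening behavior, here is a minimal, hypothetical sketch of how a map-valued property could be turned into gauge entries (the "_" separator and the property names are assumptions for illustration, not the actual appender code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GaugeNaming {

    // Hypothetical sketch: flatten a collected data property into gauge entries.
    // The actual appender logic may differ; the "_" separator is an assumption.
    static Map<String, Number> flatten(String propertyKey, Object value) {
        Map<String, Number> gauges = new LinkedHashMap<>();
        if (value instanceof Number) {
            // "first level" numeric property: gauge named after the property key
            gauges.put(propertyKey, (Number) value);
        } else if (value instanceof Map) {
            // map-valued property: look into the map values for numeric types
            for (Map.Entry<?, ?> entry : ((Map<?, ?>) value).entrySet()) {
                if (entry.getValue() instanceof Number) {
                    // gauge name = property key followed by the map entry key
                    gauges.put(propertyKey + "_" + entry.getKey(), (Number) entry.getValue());
                }
            }
        }
        return gauges;
    }

    public static void main(String[] args) {
        Map<String, Object> heap = new LinkedHashMap<>();
        heap.put("used", 1024);
        heap.put("max", 4096);
        System.out.println(flatten("HeapMemoryUsage", heap));
        // prints {HeapMemoryUsage_used=1024, HeapMemoryUsage_max=4096}
    }
}
```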
Another improvement we made in the Prometheus appender allows you to select the collected data you want to expose in the Prometheus servlet. By default, any numeric collected data is rendered by the Prometheus appender. If you want to restrict the properties included in the Prometheus appender, you can list the selected properties in
etc/org.apache.karaf.decanter.appender.prometheus.cfg by prefixing each selected property with
prometheus.key. and using
true as the value, like for instance:
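A hypothetical example of such a configuration (the property names heapUsed and threadCount are assumptions for illustration):

```properties
# Only render the heapUsed and threadCount collected properties as gauges
prometheus.key.heapUsed=true
prometheus.key.threadCount=true
```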
InfluxDB appender fix
The InfluxDB appender contained a mistake in the OSGi headers definition, causing a
ClassNotFoundException at startup.
It's now fixed in Decanter 2.8.0, and you can install the InfluxDB appender without problem.
Warning message on the socket collector bounded stream
In order to avoid huge memory consumption or DoS attacks, the socket collector uses a bounded input stream, limiting the size of incoming requests.
The bounded input stream limit can be configured via the
max.request.size property in
etc/org.apache.karaf.decanter.collector.socket.cfg, for instance:
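For example, a hypothetical configuration limiting each request to 8192 bytes (the value is chosen for illustration, not the default):

```properties
# Limit each received request to 8192 bytes
max.request.size=8192
```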
You can use
-1 for an unbounded input stream.
Before Decanter 2.8.0, if the received request data was larger than the socket collector max request size, the collector "truncated" the data without any warning message. Some users were surprised to find properties missing because of that, with nothing in the log to explain it.
In Decanter 2.8.0, the socket collector now logs a warning message if the received request is larger than the limit. In that case, you will see this kind of log message:
[WARN] Reach socket read input stream limit
Topic name definition in all collectors
You were already able to define the Decanter dispatcher topic names on which the appenders listen, thanks to the
event.topics property in the appender configuration file. For instance, you can specify the topic name the Elasticsearch appender listens on using this in the
etc/org.apache.karaf.decanter.appender.elasticsearch.cfg configuration file:
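For example (the exact topic name here is an assumption for illustration; Decanter collectors send events on decanter/collect/* topics by default):

```properties
# Only receive events sent by the JMX collector
event.topics=decanter/collect/jmx/*
```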
Thanks to that, the appenders are able to listen only to certain kinds of events, or to events coming from specific collectors.
However, not all collectors allowed you to specify the topic where the collector sends its collected data. Decanter 2.8.0 fixes that: you can now use the
event.topics property in any collector configuration file to specify the topic where the collector will send the collected data. For instance, for the JMX collector, you can now add:
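For example, a hypothetical snippet (the topic name is chosen for illustration):

```properties
# Send JMX collected data on a custom dispatcher topic
event.topics=decanter/collect/jmx/mytopic
```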
event.topics is optional, meaning that the collector sends to the same default topic as before.
Dependency updates
As usual for any Decanter release, 2.8.0 brings a bunch of dependency updates, especially:
- velocity 2.3
- commons-io 2.11
- johnzon 1.2.14
- dropwizard 1.2.3
- jetty 9.4.43.v20210629
- oshi 5.8.2
- redisson 3.16.2
- CXF 3.4.4
- Camel 3.11.1
- hadoop-client 3.3.1
- kafka 2.8.0
- orientdb-client 3.2.0
- mongodb java driver 3.12.10
- prometheus 0.11.0
- lucene 8.9.0
- aws-java-sdk 1.12.62