Posts

Showing posts from 2013

Coming in Karaf 3.0.0: new enterprise JPA (OpenJPA, Hibernate) and CDI (OpenWebBeans, JBoss Weld) features

Apache Karaf 3.0.0 is now mostly ready (I’m just polishing the documentation). In a previous post, I introduced new enterprise features like JNDI, JDBC, and JMS. As I said, the purpose is to provide a flexible, enterprise-ready container that is easy for users to use and extend. Easy to use means that a single command can extend your container with a feature that helps you a lot.

JPA

Previous Karaf versions already provided a jpa feature. However, this feature “only” installs the Aries JPA bundles, allowing you to expose the EntityManager as an OSGi service. It doesn’t install any JPA engine. This means that, previously, users had to install all the bundles required for a persistence engine themselves. Karaf 3.0.0 now provides two ready-to-use features for very popular persistence engines:

karaf@root()> feature:install openjpa

The openjpa feature brings Apache OpenJPA into Apache Karaf.

karaf@root()> feature:install hibernate

The hibernate feature brings Hibernate into Apache Karaf.

CDI

Karaf 3.0.0 …
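As a sketch of what using the installed engine might look like, here is a minimal META-INF/persistence.xml fragment for a bundle using Aries JPA with OpenJPA. The persistence-unit name, the JNDI service name (jdbc/mydb), and the entity class are hypothetical, not from the post:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="my-unit" transaction-type="JTA">
    <!-- OpenJPA as the persistence provider (installed by the openjpa feature) -->
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <!-- Reference a DataSource published as an OSGi service via the osgi:service scheme -->
    <jta-data-source>
      osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/mydb)
    </jta-data-source>
    <class>org.example.MyEntity</class>
  </persistence-unit>
</persistence>
```

With such a persistence unit in the bundle, Aries JPA exposes the EntityManager(Factory) as an OSGi service that other bundles can consume.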

Coming in Karaf 3.0.0: new enterprise JMS feature

In my previous post, I introduced the new enterprise JDBC feature. Following the same purpose, we introduced a new enterprise JMS feature.

JMS feature

Like the JDBC feature, the JMS feature is optional, which means you have to install it first:

karaf@root()> feature:install jms

The jms feature installs the JMS service, which is mostly a JMS “client”. It doesn’t install any broker. For the rest of this post, I’m using an ActiveMQ broker embedded in my Karaf:

karaf@root()> feature:repo-add activemq 5.9.0
karaf@root()> feature:install activemq-broker

Like the JDBC feature, the JMS feature provides:
– an OSGi service
– jms:* commands
– a JMX JMS MBean

The OSGi service provides a set of operations to create JMS connection factories, send JMS messages, browse a JMS queue, etc. The commands and the MBean manipulate this OSGi service.

Commands

The jms:create command allows you to create a JMS connection factory. This command automatically creates a connectionfactory-[name].xml blueprint file…
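A rough sketch of a jms:* session against the embedded broker might look like this. The exact option names and defaults may differ from what jms:create actually accepts in your build (check jms:create --help); the connection factory and queue names are illustrative:

```
karaf@root()> jms:create -t activemq -u tcp://localhost:61616 mycf
karaf@root()> jms:connectionfactories
karaf@root()> jms:send mycf myqueue "Hello Karaf"
karaf@root()> jms:browse mycf myqueue
```

Behind the scenes, jms:create drops the generated connectionfactory-mycf.xml blueprint file into the deploy folder, which registers the ConnectionFactory as an OSGi service.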

Coming in Karaf 3.0.0: new enterprise JDBC feature

Some weeks (months ;)) ago, my colleague Christian (Schneider) did a good job creating some useful commands to manipulate databases directly in Karaf. We discussed together where to put those commands, and we decided to submit a patch to ServiceMix because we weren’t really thinking about Karaf 3.0.0 at that time. Finally, I decided to refactor those commands into an even more “useful” Karaf feature and prepare it for Karaf 3.0.0.

JDBC feature

By refactoring, I mean that it’s no longer only commands: I built a complete JDBC feature, providing an OSGi service, a set of commands, and an MBean. The different modules are provided by the jdbc feature. Like most other enterprise features, the jdbc feature is not installed by default. To enable it, you have to install the jdbc feature first:

karaf@root()> feature:install jdbc

This feature provides:
– a JdbcService OSGi service
– a set of jdbc:* commands
– a JDBC MBean (org.apache.karaf:type=jdbc,name=*)

The OSGi service provides a set of operations to…
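As an illustrative sketch of what a jdbc:* session could look like (the option names and the datasource name are assumptions, not taken from the post; check jdbc:create --help for the real options):

```
karaf@root()> jdbc:create -t derby -u testdb testds
karaf@root()> jdbc:datasources
karaf@root()> jdbc:tables testds
karaf@root()> jdbc:query testds "select * from my_table"
```

The same operations are available programmatically via the JdbcService OSGi service and remotely via the JDBC MBean.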

Coming in Karaf 3.0.0: new enterprise JNDI feature

In previous Karaf versions (2.x), the JNDI support was “basic”. We just leveraged Aries JNDI to support the osgi:service JNDI scheme, to reference OSGi services using a JNDI name. However, we didn’t provide a fully functional JNDI initial context, nor any tooling around JNDI. As part of the new enterprise features coming with Karaf 3.0.0, the JNDI support is now more “complete”.

Add JNDI support

Like most of the other enterprise features, the JNDI feature is optional, which means you have to install the jndi feature first:

karaf@root()> feature:install jndi

The jndi feature installs several parts.

Ready-to-use initial context

Like in previous versions, Karaf provides a fully compliant implementation of the OSGi Alliance JNDI Service Specification. This specification details how to advertise InitialContextFactory and ObjectFactories in an OSGi environment. It also defines how to obtain services from the service registry via JNDI. Now, it’s possible to directly use the JNDI init…
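As a sketch of the osgi:service scheme mentioned above, here is what a lookup could look like from inside a bundle deployed in Karaf (the DataSource interface and the jdbc/mydb service name are hypothetical; this fragment only runs inside an OSGi container with the jndi feature installed):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupExample {

    // Look up an OSGi service via JNDI: interface name plus an optional LDAP filter
    public DataSource lookupDataSource() throws NamingException {
        InitialContext context = new InitialContext();
        return (DataSource) context.lookup(
            "osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/mydb)");
    }
}
```

The filter part is optional: a lookup on osgi:service/javax.sql.DataSource alone would return one matching service from the registry.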

Coming in Karaf 3.0.0: RBAC support for OSGi services and console commands

In a previous post, we saw a new Karaf feature: support of user groups and Role-Based Access Control (RBAC) for the JMX layer. We extended the RBAC support to OSGi services and, by side effect, to the console commands (as a console command is also an OSGi service).

RBAC for OSGi services

The JMX RBAC support uses a MBeanServerBuilder. The KarafMBeanServerBuilder “intercepts” the calls to the MBeans, checks the definitions (in the etc/jmx.acl.*.cfg configuration files), and decides whether the call can be performed or not. For the RBAC support of OSGi services, we use a similar mechanism. The Karaf Service Guard provides a service listener which intercepts the service calls and checks whether the call to the service can be performed or not. The list of “secured” OSGi services is defined by the karaf.secured.services property in etc/system.properties (using an LDAP filter syntax). By default, we only “intercept” (and so secure) the command OSGi services: karaf.secured.services…
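A sketch of what this configuration could look like. The LDAP filter matches the command OSGi services via their scope/function properties; the ACL file name and the manager role below are illustrative assumptions, not taken from the post:

```
# etc/system.properties: which OSGi services are secured (LDAP filter syntax)
karaf.secured.services = (&(osgi.command.scope=*)(osgi.command.function=*))

# etc/org.apache.karaf.command.acl.bundle.cfg:
# hypothetical rule — only users with the manager role may uninstall bundles
uninstall = manager
```

Any service whose properties match the karaf.secured.services filter goes through the Service Guard interceptor; everything else is invoked directly, with no RBAC overhead.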

Some book reviews: Instant Apache Camel Messaging System, Learning Apache Karaf, and Instant Apache ServiceMix How-To

I’m pleased to be a reviewer of new books published by Packt: Instant Apache Camel Messaging System, Learning Apache Karaf, and Instant Apache ServiceMix How-To. I received a “hard” copy from Packt (thanks for that), and I’m now able to do the review.

Instant Apache Camel Messaging System, by Evgeniy Sharapov. Published by Packt Publishing in September 2013.

This book is a good introduction to Camel. It covers the Camel fundamentals.

What is Apache Camel: it’s a quick introduction to Camel, in only four pages. We get a good overview of the Camel basics: what a component is, routes, contexts, EIPs, etc. We have to see it for what it is: just a quick introduction. Don’t expect a lot of detail about the Camel basics; it only provides a very high-level overview.

Installation: to be honest, I don’t like this part. It focuses mostly on using Maven with Camel: how to use Camel with Maven, integrating Camel in your IDE (Eclipse or IntelliJ), and usage of the archetypes. I think it’s too restrictive. I woul…

Talend ESB Continuous Integration, part 2: Maven and commandline

In the first part of the “Talend ESB Continuous Integration” series, we saw how to test the Camel routes created by the studio by leveraging the Camel Test Kit, and how to automate the testing using Jenkins. The Maven POM that we wrote assumes that the route has been deployed (to the local repository or to a remote repository like Apache Archiva). But it’s not very elegant for the Studio to publish directly to the Archiva repository, especially from a continuous integration perspective. In this second article, I will show how to use the Talend commandline with Maven, and how to do nightly builds using Jenkins.

Talend CommandLine

CommandLine introduction

The Talend commandline is the Talend Studio without the GUI. Thanks to the commandline, you can perform a lot of actions, like checkout, export route, publish route, and execute route. Actually, you can do all actions except the design itself 😉 You can find the commandline*.sh scripts directly in your Talend Studio installation, or you can launch the commandlin…

Talend ESB Continuous Integration, part 1: Using Camel Test Kit

Introduction

In this series of articles, I will show how to set up a Continuous Integration solution mixing Talend ESB tools, Maven, and Jenkins. The purpose is to decouple the design (performed in the studio), the tests (both unit and integration tests), and the deployment of the artifacts. The developers who use the studio should never directly upload to the Maven repository (Archiva in my case). I propose to implement the following steps:
– the developers use the studio to design their routes: the metadata (used to generate the code) are stored in Subversion. The studio “only” checks out and commits on Subversion: it never directly uploads to the artifact repository.
– a continuous integration tool (Jenkins in my case) uses Maven. The Maven POM leverages the Talend commandline (a studio without the GUI) to check out, generate the code, and publish to the artifact repository. The Maven POM is also used to execute unit tests, eventually integration tests, and to cleanly cut the releases.
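As a sketch, wiring the Camel Test Kit into the Maven build typically only requires the camel-test dependency with test scope in the POM (the version shown is an assumption matching the Camel releases of that period, not taken from the article):

```xml
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-test</artifactId>
  <version>2.12.1</version>
  <scope>test</scope>
</dependency>
```

With this in place, Jenkins running mvn test executes the route unit tests (classes extending CamelTestSupport) on every build, before any artifact is published.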

Coming in Karaf 3.0.0: subshell and completion mode

If you are a Karaf user, you probably know that Karaf is very extensible: you can add features to Karaf to provide new functionality. For instance, you can install Camel, ActiveMQ, CXF, Cellar, etc. in your Karaf runtime. Most of these features provide new commands:
– Camel provides camel:* commands to manipulate the Camel context, the routes, etc.
– CXF provides cxf:* commands to manipulate the CXF buses, endpoints, etc.
– ActiveMQ provides activemq:* commands to manipulate brokers.
– Cellar provides cluster:* commands to manipulate cluster nodes, cluster groups, etc.
– and so on
If you install several features like this, the number of commands available in the Karaf shell console is really impressive, and it’s not always easy to find the one that you need. That’s why subshell support has been introduced.

Subshell

Karaf now uses the command scope to create a subshell “on the fly”: the commands are grouped by subshell. As you will see later, depending on the completion mode that you use…
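As a sketch of how this could look in practice (assuming the Camel feature is installed; the exact prompt rendering may vary): typing a scope name enters the corresponding subshell, the prompt reflects the current scope, and completion then only proposes that scope’s commands:

```
karaf@root()> camel
karaf@root(camel)> route-list
karaf@root(camel)> exit
karaf@root()>
```

Inside the camel subshell, you type route-list instead of camel:route-list, and exit brings you back to the root shell.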

Coming in Karaf 3.0.0: JAAS users, groups, roles, and ACLs

This week I worked with David Bosschaert. David proposed a patch for Karaf 3.0.0 to add the notion of groups and to use ACLs for JMX. He posted a blog entry about that: http://coderthoughts.blogspot.fr/2013/10/jmx-role-based-access-control-for-karaf.html. David’s blog is very detailed, mostly in terms of the implementation, the usage of the interceptor, etc. This blog is more about pure end-user usage: how to configure groups, the JMX ACL, etc.

JAAS users, groups, and roles

Karaf uses JAAS for user authentication and authorisation. By default, it uses the PropertiesLoginModule, which uses the etc/users.properties file to store the users. The etc/users.properties file has the following format:

user=password,role

For instance:

karaf=karaf,admin

That means we have a user karaf, with password karaf and the admin role. Actually, the roles are not really used in Karaf: for instance, when you use ssh or JMX, Karaf checks the principal and credentials (basically the username and password) but it doesn…
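As a sketch of how groups can be layered on top of this format (the user, group, and role names below are illustrative assumptions): a user entry can reference a group with the _g_: prefix, and the group entry then carries the actual roles, so several users can share one role set:

```
# etc/users.properties (illustrative)
# alice belongs to the admingroup group (note the _g_: prefix)
alice = secret, _g_:admingroup

# the group definition lists the roles granted to its members
_g_\:admingroup = group, admin, manager
```

Changing the roles of every member of admingroup then only requires editing the single group line.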

Apache ActiveMQ 5.7, 5.9 and Master-Slave

With my ActiveMQ friends (especially Dejan and Claus), I’m working on the next ActiveMQ 5.9 release. Today, I’m focusing on HA with ActiveMQ, and especially the Master-Slave configuration.

Update of the documentation

The first thing that I noticed is that the documentation is not really up to date. If you search the ActiveMQ website for Master-Slave, you will probably find these two links:

http://activemq.apache.org/kahadb-master-slave.html
http://activemq.apache.org/shared-file-system-master-slave.html

On the first link (about KahaDB), we can see a note “This is under review – and not currently supported”. It’s confusing for the users, as this mechanism is the preferred one! On the other hand, the second link should be flagged as deprecated, as this mechanism is no longer maintained. I sent a message on the dev mailing list to get these pages updated.

Lease Database Locker to avoid “dual masters”

In my test cases, I used a JDBC database backend (MySQL) for HA (instead of using KahaDB). I hav…
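As a sketch of the lease database locker setup with a JDBC persistence adapter, here is an activemq.xml fragment (the mysql-ds datasource bean reference and the interval value are illustrative assumptions):

```xml
<persistenceAdapter>
  <jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#mysql-ds">
    <locker>
      <!-- the master periodically renews a lease in the DB; if it stops renewing,
           a slave can acquire the lease and take over, avoiding dual masters -->
      <lease-database-locker lockAcquireSleepInterval="10000"/>
    </locker>
  </jdbcPersistenceAdapter>
</persistenceAdapter>
```

Unlike the default exclusive-lock approach, the lease has to be actively renewed, so a master that loses its database connection will give up mastership instead of both brokers believing they hold the lock.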