Improvements on the Apache Karaf scheduler
Apache Karaf has a scheduler, allowing you to periodically execute actions. It’s powered by Quartz and provides several convenient features.
You can easily install the scheduler with the scheduler feature:
karaf@root> feature:install scheduler
Once you have installed the scheduler, you have new commands available: scheduler:schedule, scheduler:list, etc.
The Karaf scheduler also provides a whiteboard pattern, looking for Runnable or Job services. It uses the service properties as the scheduling configuration.
It’s what Karaf Decanter uses for the polled collectors, like the JMX collector for instance: the Karaf scheduler executes the run() method of the JMX collector every minute by default. You can “reschedule” a job using the scheduler:reschedule command.
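To illustrate the whiteboard pattern, here is a minimal sketch of a Runnable and the service properties (scheduler.name, scheduler.period, scheduler.concurrent) that the scheduler reads as its configuration. The task name and period values are illustrative; in a real bundle you would register the service through the BundleContext, as shown in the comment:

```java
import java.util.Hashtable;

// Sketch of the whiteboard pattern: a Runnable whose OSGi service
// properties drive the scheduling.
public class ScheduledTask implements Runnable {

    @Override
    public void run() {
        System.out.println("task executed");
    }

    // Service properties the Karaf scheduler reads as its configuration.
    public static Hashtable<String, Object> serviceProperties() {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("scheduler.name", "my-task");   // illustrative name
        props.put("scheduler.period", 60L);       // run every 60 seconds
        props.put("scheduler.concurrent", false); // no overlapping runs
        return props;
    }

    // In a bundle activator you would register the service like:
    //   context.registerService(Runnable.class, new ScheduledTask(),
    //                           ScheduledTask.serviceProperties());
    public static void main(String[] args) {
        Hashtable<String, Object> props = serviceProperties();
        System.out.println(props.get("scheduler.name")
                + " every " + props.get("scheduler.period") + "s");
    }
}
```

Once such a service is registered, the scheduler picks it up automatically; no command is needed.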
Scheduler improvements in the upcoming Apache Karaf 4.2.3 release
New commands
Now, in addition to scripts, you can also schedule command executions.
The previous scheduler:schedule command has been renamed to scheduler:schedule-script (you can create an alias in etc/shell.init.script for backward compatibility).
A new command has been introduced: scheduler:schedule-command. This command allows you to schedule the execution of a shell command.
For instance, you can schedule the execution of the la command every 30 seconds, 5 times:
karaf@root()> scheduler:schedule-command --period 30 --times 5 la
Storage support, with an example using the JDBC JobStore
An important fix/improvement has been made to the scheduler: we introduced a new SchedulerStorage to be able to store the jobs in an external storage. Of course, the default behavior is unchanged and completely transparent, but you can now use a Quartz storage.
It’s what we will show in this blog post, using the Quartz JDBC JobStoreTX.
Database preparation
Before using the job store in the scheduler, we have to prepare a database.
For this example, I’m using a Derby database, but any JDBC database can be used.
Let’s start Derby NetworkServer:
derby$ bin/startNetworkServer
Now, using Derby ij, we connect to the Derby server and create our database:
derby$ bin/ij
ij version 10.14
ij> connect 'jdbc:derby://localhost:1527/scheduler;create=true';
The Derby scheduler database is now created. We now have to create the tables used by the Quartz JobStore. The Quartz distribution provides the SQL scripts to create those tables: you can find the script corresponding to your database in the $QUARTZ/docs/dbTables directory.
In particular, we can find tables_derby.sql there. Let’s execute this script on our Derby database:
ij> run '/path/to/quartz/docs/dbTables/tables_derby.sql';
That’s it: our database is now ready, we can use it in the Karaf scheduler.
Creating the datasource in Apache Karaf
To use our scheduler database in the Karaf scheduler, we first create a datasource.
For that, we are using the jdbc feature in Karaf:
karaf@root()> feature:install jdbc
Then, we install the derbyclient JDBC provider:
karaf@root> feature:install pax-jdbc-derbyclient
As we will use the DataSource via a JNDI name in the Karaf scheduler Quartz configuration, we also install the jndi feature:
karaf@root()> feature:install jndi
We can now create the datasource directly using the jdbc:ds-create command:
karaf@root()> jdbc:ds-create -dn derbyclient -url jdbc:derby://localhost:1527/scheduler scheduler
Our datasource is ready and available with a JNDI name:
karaf@root()> jdbc:ds-list
Name      │ Service Id │ Product      │ Version               │ URL                                   │ Status
──────────┼────────────┼──────────────┼───────────────────────┼───────────────────────────────────────┼───────
scheduler │ 91         │ Apache Derby │ 10.14.2.0 - (1828579) │ jdbc:derby://localhost:1527/scheduler │ OK
karaf@root()> jndi:names
JNDI Name              │ Class Name
───────────────────────┼───────────────────────────────────────────────
osgi:service/scheduler │ org.ops4j.pax.jdbc.impl.DriverDataSource
osgi:service/jndi      │ org.apache.karaf.jndi.internal.JndiServiceImpl
Using the datasource in the Karaf scheduler
Now, we can install the scheduler feature:
karaf@root()> feature:install scheduler
You can now configure the Karaf scheduler to use a JDBC job store with our datasource. For that, let’s edit the etc/org.apache.karaf.scheduler.cfg configuration file:
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

#============================================================================
# Configure Karaf Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName=Karaf
org.quartz.scheduler.instanceId=AUTO

#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=30
org.quartz.threadPool.threadPriority=5

#============================================================================
# Configure DataSource
#============================================================================
org.quartz.dataSource.scheduler.jndiURL=osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=scheduler)

#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.dataSource=scheduler
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
The org.quartz.dataSource.scheduler.jndiURL property contains the JNDI name of our datasource. Thanks to the jndi Karaf feature, we can directly use the osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=scheduler) JNDI name.
Then, we set up the JDBC job store via the org.quartz.jobStore.class property.
The org.quartz.jobStore.dataSource property contains the datasource name defined earlier in the configuration file.
Finally, Quartz supports several JDBC dialects, depending on the database; this is defined by the org.quartz.jobStore.driverDelegateClass property. In our case, as we use a Derby database, we use the generic one: org.quartz.impl.jdbcjobstore.StdJDBCDelegate.
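For other databases, the Quartz distribution ships dedicated delegates. As an illustration (the class names below come from Quartz itself), with PostgreSQL or SQL Server you would use:

```properties
# PostgreSQL
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate

# Microsoft SQL Server
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.MSSQLDelegate
```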
Nothing changes in the usage of the scheduler (you use the same commands, the same whiteboard, …), but you can now have several scheduler instances “synchronized” thanks to the shared store.
It’s especially interesting when you have a Karaf cluster or when your Karaf instances are running on the cloud.
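For several instances to actually share the store, Quartz also needs its clustering mode enabled. A minimal sketch, using the standard Quartz clustering properties, to add in etc/org.apache.karaf.scheduler.cfg on each instance (the check-in interval value is just an example):

```properties
# Enable Quartz clustering on top of the shared JDBC job store
org.quartz.jobStore.isClustered=true
# How often (in ms) each instance checks in with the cluster
org.quartz.jobStore.clusterCheckinInterval=20000
# Each instance must have a unique id; AUTO generates one
org.quartz.scheduler.instanceId=AUTO
```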