NatTable + Eclipse Collections = Performance & Memory improvements ?

Some time ago I got reports from NatTable users about high memory consumption when using NatTable with huge data sets, especially when using trees, the row hide/show feature and/or the row grouping feature. Typically I tended to say that this is because of the huge data set in memory, not because of the NatTable implementation. But as a good open source developer I take such reports seriously, so I verified the statement to be sure. I updated one of the NatTable examples that combines all three features to show about 2 million entries. Then I modified some row heights, collapsed tree nodes and hid some rows. After checking the memory consumption I was surprised. The diagram below shows the result. The heap usage goes up to and beyond 1.5 GB on scrolling. In between I performed a GC and scrolled again, which causes those peaks and valleys.

A more detailed inspection reveals that the high memory consumption is not because of the data in memory itself. There are a lot of primitive wrapper objects and internal objects in the map implementation that consume a big portion of the memory, as you can see in the following image.

Note:
Primitive wrapper objects have a higher memory consumption than primitive values themselves. As there are already good articles about that topic available, I will not repeat them here. If you are interested in more details on the topic of primitives vs. objects, you can have a look at Baeldung for example.

So I started to check the NatTable implementation in search of the memory issue. And I found some causes. In several places there are internal caches for the index-position mapping to improve the rendering performance. Also the row heights and column widths are stored internally in a collection if a user resized them. Additionally, some scaling operations were incorrectly using Double objects instead of primitive values to avoid rounding issues on scaling.

From my experience in an Android project I remembered an article that described a similar issue. In short: Java has no collections for primitive types, therefore primitive values need to be stored via wrapper objects. In Android the SparseArray was introduced to deal with this issue. So I was searching for primitive collections in Java and found Eclipse Collections. To be honest, I had heard about Eclipse Collections before, but I always thought the standard Java Collections are already good enough, so why check some third-party collections. Small spoiler: I was wrong!

Looking at the website of Eclipse Collections, they state that they have better performance and lower memory consumption than the standard Java Collections. But a good developer and architect does not simply trust statements like “take my library and all your problems are solved”. So I started my evaluation of Eclipse Collections to see if the memory and performance issues in NatTable can be solved by using them. Additionally I was looking at the primitive type streams introduced with Java 8 to see if some issues can even be addressed using that API.

Creation of test data

Right at the beginning of my evaluation I noticed the first issue: which way should be used to create a huge collection of test data to process? I read about some discussions on using the good old for-loop vs. IntStream. So I started with some basic performance measurements to compare those two. The goal was to create test data with values from 0 to 1.000.000 where every 100.000th entry is missing.

The for-loop for creating an int[] with the described values looks like this:

int[] values = new int[999_991];
int index = 0;
for (int i = 0; i < 1_000_000; i++) {
    if (i == 0 || i % 100_000 != 0) {
        values[index] = i;
        index++;
    }
}

Using the IntStream API it looks like this:

int[] values = IntStream.range(0, 1_000_000)
        .filter(i -> i == 0 || i % 100_000 != 0)
        .toArray();

Additionally I wanted to compare the performance for creating an ArrayList<Integer> via for-loop and IntStream.

ArrayList<Integer> values = new ArrayList<>(999_991);
for (int i = 0; i < 1_000_000; i++) {
    if (i == 0 || i % 100_000 != 0) {
        values.add(i);
    }
}

List<Integer> values = IntStream.range(0, 1_000_000)
        .filter(i -> (i == 0 || i % 100_000 != 0))
        .boxed()
        .collect(Collectors.toList());

The result is interesting, although not surprising. Using the for-loop for creating an int[] is the clear winner. The usage of the IntStream is not bad, but definitely worse than the for-loop. So for recurring tasks and huge ranges a refactoring from for-loop to IntStream is not a good idea. The creation of collections with wrapper objects is of course even worse, as the wrapper objects need to be created via boxing.

collecting int[] via for-loop 1 ms
collecting int[] via IntStream 4 ms
collecting List<Integer> via for-loop 7 ms
collecting List<Integer> via IntStream 13 ms

I also tested the usage of HashSet and TreeSet for the wrapper objects, as in NatTable I typically need distinct values, often sorted for further processing. HashSet as well as TreeSet have a worse performance in the creation scenario, but TreeSet is the clear loser here.

collecting HashSet<Integer> via for-loop 16 ms
collecting TreeSet<Integer> via for-loop 189 ms
collecting Set<Integer> via IntStream 26 ms 

Note:
Running the tests in a single execution, the numbers are worse, which is caused by the VM ramp-up and class loading. Executing them 10 times, the averages are similar to the numbers above but still worse, because the first execution is that much slower. Even increasing the number of executions to 1.000, the average values stay roughly the same and sometimes even get drastically better because of the VM optimizations for code that gets executed often. So the numbers presented here are the average out of 100 executions.
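
To illustrate the methodology, a minimal sketch of such a measurement loop could look like the following. It only illustrates how the averages were taken and is not the exact harness used for the numbers above.

int runs = 100;
long sum = 0;
for (int r = 0; r < runs; r++) {
    long start = System.currentTimeMillis();

    // scenario under test: collecting int[] via for-loop
    int[] values = new int[999_991];
    int index = 0;
    for (int i = 0; i < 1_000_000; i++) {
        if (i == 0 || i % 100_000 != 0) {
            values[index++] = i;
        }
    }

    sum += System.currentTimeMillis() - start;
}
System.out.println("collecting int[] via for-loop " + (sum / runs) + " ms");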

After evaluating the performance of standard Java API for creating test data, I looked at the Eclipse Collections – Primitive Collections. I compared MutableIntList with MutableIntSet and used the different factory methods for creating the test data:

  • Iteration
    directly operate on an initial empty MutableIntList
    Note: it is not possible to specify an initial capacity

    MutableIntList values = IntLists.mutable.empty();
    for (int i = 0; i < 1_000_000; i++) {
        if (i == 0 || i % 100_000 != 0) {
            values.add(i);
        }
    }
  • Factory method of(int...) / with(int...)
    MutableIntList values = IntLists.mutable.of(inputArray);
  • Factory method ofAll(Iterable<Integer>) / withAll(Iterable<Integer>)
    MutableIntList values = IntLists.mutable.ofAll(inputCollection);
  • Factory method ofAll(IntStream) / withAll(IntStream)
    MutableIntList values = IntLists.mutable.ofAll(
        IntStream
            .range(0, 1_000_000)
            .filter(i -> (i == 0 || i % 100_000 != 0)));

To create a MutableIntSet use the IntSets utility class:

MutableIntSet values = IntSets.mutable.xxx

Note:
For the factory methods of course the generation of the input also needs to be taken into account. So for creating data from scratch the time for creating the array or the collection needs to be added on top.

The result shows that at creation time the MutableIntList is much faster than the MutableIntSet. And the usage of the factory method with an int[] parameter is faster than using an Integer collection, an IntStream or the direct operation on the MutableIntList. The reason for this is probably that when using an int[], the MutableIntList instance is actually a wrapper around the int[]. In this case you also need to be careful, as modifications done via the primitive collection are directly reflected outside of the collection.

creating MutableIntList via iteration 3 ms
creating MutableIntList of int[] 0 ms
creating MutableIntList via Integer collection 4 ms
creating MutableIntList via IntStream 6 ms

creating MutableIntSet via iteration 32 ms
creating MutableIntSet of int[] 32 ms
creating MutableIntSet of Integer collection 39 ms
creating MutableIntSet via IntStream 38 ms

In several use cases the usage of a Set would be nicer to directly avoid duplicates in the collection. In NatTable a sorted order is also needed often, but there is no TreeSet equivalent in the primitive collections. But the MutableIntList comes with some nice API to deal with this. Via distinct() we get a new MutableIntList that only contains distinct values, via sortThis() the MutableIntList is directly sorted.

The following call returns a new MutableIntList with distinct values in a sorted order, similar to a TreeSet.

MutableIntList uniqueSorted = values.distinct().sortThis();

When changing this in the test, the time for creating a MutableIntList with distinct values in a sorted order increases to about 27 ms. Still less than creating a MutableIntSet. But as our input array is already sorted and only contains distinct values, this measurement is probably not really meaningful.

The key takeaways in this part are:

  • The good old for-loop still has the best performance. It is also faster than IntStream.range().
  • The MutableIntList has a better performance at creation time compared to MutableIntSet. This is the same with default Java List and Set implementations.
  • The MutableIntList has some nice API for modifications compared to handling a primitive array, which makes it more comfortable to use.

Usage of primitive value collections

As already mentioned, Eclipse Collections comes with a nice and comfortable API similar to the Java Stream API. But here I don’t want to go into more detail on that API. Instead I want to see how Eclipse Collections performs for the typical operations of the standard Java Collections API and compare that with the performance of the Java Collections. By doing this I want to ensure that by using Eclipse Collections the performance gets better, or at least does not become worse than with the default Java collections.

contains()

The first use case is the check if a value is contained in a collection. This is done by the contains() method.

boolean found = valuesCollection.contains(search);

For the array we compare the old-school for-loop

boolean found = false;
for (int i : valuesArray) {
    if (i == search) {
        found = true;
        break;
    }
}

with the primitive streams approach

boolean found = Arrays.stream(valuesArray).anyMatch(x -> x == search);

Additionally I added a test for using Arrays.binarySearch(). But the result is not 100% comparable, as binarySearch() requires the array to be sorted in advance. Since our array already contains the test data in sorted order, this test works.

boolean found = Arrays.binarySearch(valuesArray, search) >= 0;
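
For the Eclipse Collections primitive collections, contains() can be called directly with the primitive value, so no boxing is involved. A short sketch, assuming valuesIntList and valuesIntSet are the MutableIntList and MutableIntSet created earlier:

boolean foundInIntList = valuesIntList.contains(search);
boolean foundInIntSet = valuesIntSet.contains(search);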

We use the collections/arrays that we created before and first check for the value 450.000 which exists in the middle of the collection. Below you find the execution times of the different approaches.

contains in List 1 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms

Then we execute the same setup and check for the value 2.000.000 which does not exist in the collection. This way the whole collection/array needs to be processed, while in the above case the search stops once the value is found.

contains in List 2 ms
contains in Set 0 ms
contains in int[] stream 2 ms
contains in int[] iteration 1 ms
contains in int[] binary search 0 ms
contains in MutableIntList 0 ms
contains in MutableIntSet 0 ms

What we can see here is that the Java Primitive Streams have the worst performance for the contains() case and the Eclipse Collections perform best. But actually there is not much difference in the performance.

indexOf()

For people with a good knowledge of the Java Collections API the dedicated measurement of indexOf() might look strange, because for example the ArrayList internally uses indexOf() in its contains() implementation, which we already tested above. But the Eclipse Collections primitive collections do not use indexOf() in contains(); they operate on the internal array, and indexOf() is implemented differently, without the use of the equals() method. So a dedicated verification is useful. Below are the results for testing an existing value and a non-existing value.
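
The three variants compared below could look roughly like this. A sketch, assuming valuesList, valuesArray and valuesIntList are the collections created earlier:

// ArrayList<Integer>
int listIndex = valuesList.indexOf(Integer.valueOf(search));

// manual iteration on the int[]
int arrayIndex = -1;
for (int i = 0; i < valuesArray.length; i++) {
    if (valuesArray[i] == search) {
        arrayIndex = i;
        break;
    }
}

// MutableIntList
int intListIndex = valuesIntList.indexOf(search);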

Check indexOf() 450_000
indexOf in collection 0 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms

Check indexOf() 2_000_000
indexOf in collection 1 ms
indexOf in int[] iteration 0 ms
indexOf in MutableIntList 0 ms

The results are actually not surprising. Also in this case there is not much difference in the performance.

Note:
There is no indexOf() for Sets and of course we can also not get an index when using Java Primitive Streams. So this test only compares ArrayList, iteration on an int[] and the MutableIntList. I also skipped testing binarySearch() here, as the results would be equal to the contains() test with the same restrictions.

removeAll()

Removing multiple items from a List is a big performance issue. Before my investigation here I was not aware of how serious this issue is. What I already knew from past optimizations is that removeAll() on an ArrayList is much worse than iterating over the items to remove manually and removing each item individually.

For the test I am creating the base collection with 1.000.000 entries and a collection with the values from 200.000 to 299.999 that should be removed. First I execute the iteration that removes every item individually

for (Integer r : toRemoveList) {
    valueCollection.remove(r);
}

then I execute the test with removeAll()

valueCollection.removeAll(toRemoveList);

The tests are executed on an ArrayList, a HashSet, a MutableIntList and a MutableIntSet.
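
For the Eclipse Collections primitive collections the two variants look roughly like this. A sketch, assuming valuesIntList is the base MutableIntList and toRemoveIntList contains the values to remove:

// remove every item individually
toRemoveIntList.forEach(v -> valuesIntList.remove(v));

// remove all items at once
valuesIntList.removeAll(toRemoveIntList);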

Additionally I added a test that uses the Primitive Stream API to filter and create a new array from the result. As this is not a modification of the original collection, the result is not 100% comparable to the other executions. But it may be interesting to see anyhow (even with a dependency on binarySearch()).

int[] result = Arrays.stream(values)
    .filter(v -> (Arrays.binarySearch(toRemove, v) < 0))
    .toArray();

Note:
The code for removing items from an array is not very comfortable. Of course we could also use a library like Apache Commons that helps with primitive type arrays. But this post is about comparing plain Java Collections with Eclipse Collections, therefore I decided to skip this.

Below are the execution results:

remove all by primitive stream 21 ms
remove all by iteration List 29045 ms

remove all List 64068 ms
remove all by iteration Set 1 ms
remove all Set 1 ms
remove all by iteration MutableIntList 13602 ms
remove all MutableIntList 21 ms
remove all by iteration MutableIntSet 2 ms
remove all MutableIntSet 2 ms

You can see that the iteration approach on an ArrayList is almost twice as fast as using removeAll(). But still the performance is very bad. The performance for removeAll() as well as the iteration approach on a Set and a MutableIntSet are very good. Interestingly the call to removeAll() on a MutableIntList is also acceptable, while the iteration approach seems to have a performance issue.

The key takeaways in this part are:

  • The performance of the Eclipse Collections is at least as good as the standard Java Collections. In several cases even far better.
  • Performance workarounds that were introduced for the standard Java Collections (like the manual removal iteration above) can negate the improvements if they are simply kept when adopting Eclipse Collections instead of being changed as well.

Memory consumption

From the above measurements and observations I can say that in most cases there is a performance improvement when using Eclipse Collections compared to the standard Java Collections. And even for use cases where no big improvement can be seen, there is a small improvement or at least no performance decrease. So I decided to integrate Eclipse Collections in NatTable and use the primitive collections in every place where primitive values were stored in Java Collections. Additionally I fixed all places where wrapper objects were created unnecessarily. Then I executed the example from the beginning again to measure the memory consumption. And I was really impressed!

As you can see in the above graph, the heap usage stays below 250 MB even on scrolling. Remember, before using the Eclipse Collections primitive collections, the heap usage grew up to 1.5 GB. Going into more detail we can see that a lot of objects that were created for internal management are not created anymore. So now the data model that should be visualized takes up most of the memory, not the NatTable internals anymore.

One thing I noticed in the tests is that there is still quite some memory allocated if a MutableIntList or MutableIntSet is cleared via clear(). Basically it is the same with the Java Collections: the collection keeps the space allocated for the needed size. For the Eclipse Collections this means the internal array keeps its size, as clear() only fills the array with 0. To really free this memory you need to assign a new empty collection instance.
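
A small sketch of the difference, assuming a MutableIntList created from a large range:

MutableIntList values = IntLists.mutable.ofAll(IntStream.range(0, 1_000_000));

values.clear();                    // size is 0, but the backing int[] keeps its length
values = IntLists.mutable.empty(); // dropping the old instance lets the large array be garbage collected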

Note:
The concrete IntArrayList class contains a trimToSize() method. But as you typically work against the interfaces when using the factories, that method is not accessible, and also not all implementations contain such a method.

Conclusion

I was sceptical at the beginning, but I have to admit that Eclipse Collections is really interesting and useful when it comes to performance and memory usage optimizations with collections in Java. The API is really handy and similar to the Java Streams API, which makes the usage quite comfortable.

My takeaways after the verification:

  • For short-lived collections it is often better to either use primitive type arrays, primitive streams or the MutableIntList, which has the better performance at creation compared to the MutableIntSet.
  • For storing primitive values use MutableIntSet or MutableIntList. This gives a similar memory consumption to using primitive type arrays, while granting a rich API for modifications at runtime.
  • Make use of the Eclipse Collections API to make implementation and processing as efficient as possible.
  • When migrating from the Java Collections API to Eclipse Collections, ensure that no workarounds remain in the current code base. Otherwise you might lose big performance improvements.
  • Even when using a library like Eclipse Collections you need to take care of your memory management to avoid leaks at runtime, e.g. create a new instance instead of clearing huge collections.

Based on the observations above I decided that Eclipse Collections will become a major dependency for NatTable Core. With NatTable 2.0 it will be part of the NatTable Core Feature. I am sure that internally even more optimizations are possible by using Eclipse Collections. And I will investigate where and how this can be done. So you can expect even more improvements in that area in the future.

In case you think my tests are incorrect or need to be improved, or you simply want to verify my statements, here are the links to the classes I used for my verification:

In the example class I increased the number of data rows to about 2.000.000 via this code:

List<Person> personsWithAddress = PersonService.getFixedPersons();
for (int i = 1; i < 100_000; i++) {
    personsWithAddress.addAll(PersonService.getFixedPersons());
}

and I increased the row groups via these two lines of code:

rowGroupHeaderLayer.addGroup("Flanders", 0, 8 * 100_000);
rowGroupHeaderLayer.addGroup("Simpsons", 8 * 100_000, 10 * 100_000);

If some of my observations are wrong or the code can be made even better, please let me know! I am always willing to learn!

Thanks to the Eclipse Collections team for this library!

If you are interested in learning more about Eclipse Collections, you might want to check out the Eclipse Collections Kata.


NatTable – dynamic scaling enhancements

Over the last weeks I worked on harmonizing the scaling capabilities of NatTable. The first goal was to provide scaled versions of all internal NatTable images. This caused an update of several NatTable images like the checkbox, which you will notice in the next major release. To test the changes I implemented a basic dynamic scaling, which by accident and with some additional modifications became the new zoom feature in NatTable. I will give a short introduction to the new feature here, so early adopters have a chance to test it in different scenarios before the next major release is published.

To enable the UI bindings for dynamic scaling / zooming the newly introduced ScalingUiBindingConfiguration needs to be added to the NatTable.

natTable.addConfiguration(
    new ScalingUiBindingConfiguration(natTable));

This will add a MouseWheelListener and some key bindings to zoom in/out:

  • CTRL + mousewheel up = zoom in
  • CTRL + mousewheel down = zoom out
  • CTRL + ‘+’ = zoom in
  • CTRL + ‘-’ = zoom out
  • CTRL + ‘0’ = reset zoom

The dynamic scaling can be triggered programmatically by executing the ConfigureScalingCommand on the NatTable instance. This command has existed for quite a while, but it was mainly used internally to align the NatTable scaling with the display scaling. I have introduced new default IDpiConverter implementations to make it easier to trigger dynamic scaling:

  • DefaultHorizontalDpiConverter
    Provides the horizontal dots per inch of the default display.
  • DefaultVerticalDpiConverter
    Provides the vertical dots per inch of the default display.
  • FixedScalingDpiConverter
    Can be created with a DPI value to set a custom scaling.

At initialization time, NatTable internally fires a ConfigureScalingCommand with the default IDpiConverter to align the scaling with the display settings.
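
To trigger a custom scaling programmatically, a sketch could look like this, assuming the ConfigureScalingCommand constructor that takes a horizontal and a vertical IDpiConverter, and a desired scaling of 144 DPI:

natTable.doCommand(
    new ConfigureScalingCommand(
        new FixedScalingDpiConverter(144),
        new FixedScalingDpiConverter(144)));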

As long as only text is included in the table, registering the ScalingUiBindingConfiguration is all you have to do. Once ICellPainter implementations are used that render images, some additional work has to be done. The reason for this is that for performance and memory reasons the images are referenced in the painter and not requested for every rendering operation. As painters are not part of the event handling, they can not be simply updated. Also, for several reasons there are mechanisms that avoid applying the registered configurations multiple times.

There are three ways to style a NatTable, and as of now this requires three different ways to handle dynamic scaling updates for image painters.

  1. AbstractRegistryConfiguration
    This is the default way that has existed for a long time. Most of the default configurations provide the styling configuration this way. As there is no way to identify which configuration registers an ICellPainter and how the instances are created, the ScalingUiBindingConfiguration needs to be initialized with an updater that knows which steps to perform.

    natTable.addConfiguration(
      new ScalingUiBindingConfiguration(natTable, configRegistry -> {
    
        // we need to re-create the CheckBoxPainter
        // to reflect the scaling factor on the checkboxes
        configRegistry.registerConfigAttribute(
            CellConfigAttributes.CELL_PAINTER,
            new CheckBoxPainter(),
            DisplayMode.NORMAL,
            "MARRIED");
    
      }));
  2. Theme styling
    In a ThemeConfiguration the styling options for a NatTable are collected in one place. Previously the ICellPainter instances were created on member initialization, which was quite static. Therefore the ICellPainter instance creation was moved to a new method named createPainterInstances(), so the painter update on scaling can be performed without any additional effort. For custom painter configurations this means that they should be added to a theme via an IThemeExtension.

    natTable.addConfiguration(
        new ScalingUiBindingConfiguration(natTable));
    
    // additional configurations
    
    natTable.configure();
    
    ...
    
    IThemeExtension customThemeExtension = new IThemeExtension() {
    
        @Override
        public void registerStyles(IConfigRegistry configRegistry) {
            configRegistry.registerConfigAttribute(
                CellConfigAttributes.CELL_PAINTER,
                new CheckBoxPainter(),
                DisplayMode.NORMAL,
                "MARRIED");
        }
    
        @Override
        public void unregisterStyles(IConfigRegistry configRegistry) {
            configRegistry.unregisterConfigAttribute(
                CellConfigAttributes.CELL_PAINTER,
                DisplayMode.NORMAL,
                "MARRIED");
        }
    };
    
    ThemeConfiguration modernTheme = 
        new ModernNatTableThemeConfiguration();
    modernTheme.addThemeExtension(customThemeExtension);
    
    natTable.setTheme(modernTheme);
  3. CSS styling
    The CSS styling support in NatTable already manages the painter instance creation. The only thing to do here is to register a command handler that actively triggers the CSS apply operation. Otherwise the images will only scale on interactions with the UI.

    natTable.registerCommandHandler(
        new CSSConfigureScalingCommandHandler(natTable));

I have tested several scenarios, and the current state of development looks quite good. But of course I am not sure if I tested everything and found every possible edge case. Therefore it would be nice to get some feedback from early adopters if the new zoom feature is stable or not. The p2 update site with the current development snapshot can be found on the NatTable SNAPSHOTS page. From build number 900 on the feature is included. Any issues found can be reported on the corresponding Bugzilla ticket 560802.

Please also note that with the newly introduced zooming capability I have dropped the ZoomLayer. It only increased the cell dimensions, but not the font or the images. Therefore it was not functional (maybe never finished) IMHO, and to avoid confusion in the future I have deleted it now.


Building a “headless RCP” application with Tycho

Recently I got the request to create a “headless RCP” application from an existing Eclipse project. I read several posts on that topic and saw that a lot of people use the term “headless RCP”. First of all I have to say that “headless RCP” is a contradiction in itself. RCP means Rich Client Platform, and a rich client is typically characterized by having a graphical user interface. A headless application is an application with a command line interface, so the characteristic here is to have no graphical user interface. When people talk about a “headless RCP” application, they mean to create a command line application based on code that was created for an RCP application, but without the GUI. And that actually means they want to create an OSGi application based on Equinox.

For such a scenario I typically would recommend to use bndtools, or at least plain Java with the bnd Maven plugins. But there are scenarios where this is not possible, e.g. if your whole project is an Eclipse RCP project, which currently forces you to use the PDE tooling, and you only want to extract some parts/services to a command line tool. Well, one could also suggest separating those parts into a separate workspace where bndtools is used, and consuming them in the RCP workspace. But that increases the complexity of the development environment, as you need to deal with two different toolings for one project.

In this blog post I will explain how to create a headless product out of an Eclipse RCP project (PDE based) and how to build it automatically with Tycho. And I will also show a nice benefit provided by the bnd Maven plugins on top of it.

Let’s start with the basics. A headless application provides functionality via the command line. In an OSGi application that means to have some services that can be triggered on the command line. If your functionality is based on Eclipse Extension Points, I suggest converting them to OSGi Declarative Services. This has several benefits, one of them being that the creation of a headless application is much easier. That said, this tutorial is based on using OSGi Declarative Services. If you are not yet familiar with that, give my Getting Started with OSGi Declarative Services a try. I will use the basic bundles from the PDE variant for the headless product here.

Product Definition

For the automated product build with Tycho we need a product definition. Of course with some special configuration parameters as we actually do not have a product in Eclipse RCP terms.

  • Create the product project
    • Main Menu → File → New → Project → General → Project
    • Set name to org.fipro.headless.product
    • Ensure that the project is created in the same location as the other projects.
    • Click Finish
  • Create a new product configuration
    • Right click on project → New → Product Configuration
    • Set the filename to org.fipro.headless.product
    • Select Create configuration file with basic settings
    • Click Finish
  • Configure the product
    • Overview tab
      • ID = org.fipro.headless
      • Version = 1.0.0.qualifier
      • Uncheck The product includes native launcher artifacts
      • Leave Product and Application empty
        Product and Application are used in RCP products, and therefore not needed for a headless OSGi command line application.
      • This product configuration is based on: plug-ins
        Note:
        You can also create a product configuration that is based on features. For simplicity we use the simple plug-ins variant.
    • Contents tab
      • Add the following bundles/plug-ins:
      • Custom functionality
        • org.fipro.inverter.api
        • org.fipro.inverter.command
        • org.fipro.inverter.provider
      • OSGi console
        • org.apache.felix.gogo.command
        • org.apache.felix.gogo.runtime
        • org.apache.felix.gogo.shell
        • org.eclipse.equinox.console
      • Equinox OSGi Framework with Felix SCR for Declarative Services support
        • org.eclipse.osgi
        • org.eclipse.osgi.services
        • org.eclipse.osgi.util
        • org.apache.felix.scr
    • Configuration tab
      • Start Levels
        • org.apache.felix.scr, StartLevel = 0, Auto-Start = true
          This is necessary because Equinox has the policy to not automatically activate any bundle. Bundles are only activated if a class is directly requested from it. But the Service Component Runtime is never required directly, so without that setting, org.apache.felix.scr will never get activated.
      • Properties
        • eclipse.ignoreApp = true
          Tells Equinox to skip trying to start an Eclipse application.
        • osgi.noShutdown = true
          The OSGi framework will not be shut down after the Eclipse application has ended. You can find further information about these properties in the Equinox Framework QuickStart Guide and the Eclipse Platform Help.

Note:
If you want to launch the application from within the IDE via the Overview tab → Launch an Eclipse application, you need to provide the parameters as launching arguments instead of configuration properties. But running a command line application from within the IDE doesn’t make much sense: either you need to pass the command line parameters to process, or activate the OSGi console to be able to interact with the application. This should not be part of the final build result. But to verify the setup in advance you can add the following to the Launching tab:

  • Program Arguments
    • -console
  • VM Arguments
    • -Declipse.ignoreApp=true -Dosgi.noShutdown=true

When adding the parameters in the Launching tab instead of the Configuration tab, the configurations are added to the eclipse.ini in the root folder, not to the config.ini in the configuration folder. When starting the application via jar, the eclipse.ini in the root folder is not inspected.
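
For orientation, the properties configured on the Configuration tab end up in the generated configuration/config.ini. An illustrative excerpt (the generated file additionally contains entries like osgi.framework and the osgi.bundles list with the configured start levels):

eclipse.ignoreApp=true
osgi.noShutdown=true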

Tycho build

To build the product with Tycho, you don’t need any specific configuration. You simply build it by using the tycho-p2-repository-plugin and the tycho-p2-director-plugin, like you do with an Eclipse product. This is for example explained here.

Create a pom.xml in the org.fipro.headless.product project.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.fipro</groupId>
    <artifactId>org.fipro.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>

  <groupId>org.fipro</groupId>
  <artifactId>org.fipro.headless</artifactId>
  <packaging>eclipse-repository</packaging>
  <version>1.0.0-SNAPSHOT</version>

  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-p2-repository-plugin</artifactId>
        <version>${tycho.version}</version>
        <configuration>
          <includeAllDependencies>true</includeAllDependencies>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-p2-director-plugin</artifactId>
        <version>${tycho.version}</version>
        <executions>
          <execution>
            <id>materialize-products</id>
            <goals>
              <goal>materialize-products</goal>
            </goals>
          </execution>
          <execution>
            <id>archive-products</id>
            <goals>
              <goal>archive-products</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

For more information about building with Tycho, have a look at the vogella Tycho tutorial.

Running the build via mvn clean verify should create the resulting product in the folder org.fipro.headless.product/target/products. The archive file org.fipro.headless-1.0.0-SNAPSHOT.zip contains the product artifacts and the p2 related artifacts created by the build process. For the headless application only the folders configuration and plugins are relevant: configuration contains the config.ini with the necessary configuration attributes, and in the plugins folder you find all bundles that are part of the product.

Since we did not add a native launcher, the application can be started with the java command. Additionally we need to open the OSGi console, as we have no starter yet. From the parent folder above configuration and plugins execute the following command to start the application with a console (update the filename of org.eclipse.osgi bundle as this changes between Eclipse versions):

java -jar plugins/org.eclipse.osgi_3.15.100.v20191114-1701.jar -configuration ./configuration -console

The -configuration parameter tells the framework where it should look for the config.ini, the -console parameter opens the OSGi console.

You can now interact with the OSGi console and even start the “invert” command implemented in the Getting Started tutorial.

Native launcher

While the variant without a native launcher is more easily exchangeable between operating systems, it is not very comfortable to start from a user's perspective. Of course you could also add a batch file for simplification, but Equinox also provides native launchers. So we will add native launchers to our product. This is fairly easy, because you only need to check The product includes native launcher artifacts on the Overview tab of the product file and execute the build again.

The resulting product now also contains the following files:

  • eclipse.exe
    Eclipse executable.
  • eclipse.ini
    Configuration pointing to the launcher artifacts.
  • eclipsec.exe
    Console optimized executable.
  • org.eclipse.equinox.launcher artifacts in the plugins directory
    Native launcher artifacts.

You can find some more information on those files in the FAQ.

To start the application you can use the added executables.

eclipse.exe -console

or

eclipsec.exe -console

The main difference at first glance is that eclipse.exe opens a new shell, while eclipsec.exe stays in the same shell when opening the OSGi console. The FAQ says “On Windows, the eclipsec.exe console executable can be used for improved command line behavior.”.

Note:
You can change the name of the eclipse.exe file in the product configuration on the Launching tab by setting a Launcher Name. But this will not affect the eclipsec.exe.

Command line parameter

Starting a command line tool with an interactive OSGi console is typically not what people want. This is nice for debugging purposes, but not for productive use. In productive use you usually want to pass some parameters on the command line and then process the inputs. In plain Java you take the arguments from the main() method and process them. But in an OSGi application you do not write a main() method; the framework launcher has the main() method. To start your application directly you therefore need to create some kind of starter that can inspect the launch arguments.

With OSGi Declarative Services the starter is an immediate component. That is a component that gets activated directly once all references are satisfied. To be able to inspect the command line parameters in an OSGi application, you need to know how the launcher that started it provides this information. The Equinox launcher for example provides this information via org.eclipse.osgi.service.environment.EnvironmentInfo which is provided as a service. That means you can add a @Reference for EnvironmentInfo in your declarative service, and once it is available the immediate component gets activated and the application starts.

Create new project org.fipro.headless.app

  • Create the app project
    • Main Menu → File → New → Plug-in Project
    • Set name to org.fipro.headless.app
  • Create a package via right-click on src
    • Set name to org.fipro.headless.app
  • Open the MANIFEST.MF file
    • Add the following to Imported Packages
      • org.osgi.service.component.annotations
        Remember to mark it as optional to avoid runtime dependencies to the annotations.
      • org.eclipse.osgi.service.environment
        To be able to consume the Equinox EnvironmentInfo.
      • org.fipro.inverter
        To be able to consume the functional services.
  • Add org.fipro.headless.app to the Contents of the product definition.
  • Add org.fipro.headless.app to the modules section of the pom.xml.

Create an immediate component with the name EquinoxStarter.

@Component(immediate = true)
public class EquinoxStarter {

    @Reference
    EnvironmentInfo environmentInfo;

    @Reference
    StringInverter inverter;

    @Activate
    void activate() {
        for (String arg : this.environmentInfo.getNonFrameworkArgs()) {
            System.out.println(inverter.invert(arg));
        }
    }
}

With the simple version above you will notice some issues if you are not specifying the -console parameter:

  1. If you start the application via eclipse.exe with an additional parameter, the code will be executed, but you will not see any output.
  2. If you start the application via eclipsec.exe with an additional parameter, you will see an output but the application will not finish.

If you pass the -console parameter, the output will be seen in both cases and the OSGi console opens immediately afterwards.

First let’s have a look why the application seems to hang when started via eclipsec.exe. The reason is simply that we configured osgi.noShutdown=true, which means the OSGi framework will not be shut down after the Eclipse application has ended. So the simple solution would be to specify osgi.noShutdown=false. The downside is that then using the -console parameter will not keep the OSGi console open, but close the application immediately. Also using eclipse.exe with the -console parameter will not keep the OSGi console open. So the configuration parameter osgi.noShutdown should be set depending on whether an interactive mode via OSGi console should be supported or not.

If both variants should be supported, osgi.noShutdown should be set to true and a check for the -console parameter needs to be added in code. If that parameter is not set, close the application via System.exit(0).

-console is an Equinox framework parameter, so the check and the handling looks like this:

boolean isInteractive = Arrays
    .stream(environmentInfo.getFrameworkArgs())
    .anyMatch(arg -> "-console".equals(arg));

if (!isInteractive) {
    System.exit(0);
}

With the additional handling above, the application will stay open with an active OSGi console if -console is set, and it will close immediately if -console is not set.

The other issue we faced was that we did not see any output when using eclipse.exe. The reason is that the outputs are not sent to the executing command shell. And without specifying an additional parameter, the separate command shell is not even opened. One option to handle this is to open the command shell and keep it open until a user input closes it again. The framework parameter for this is -consoleLog, and the check could be as simple as the following:

boolean showConsoleLog = Arrays
    .stream(environmentInfo.getFrameworkArgs())
    .anyMatch(arg -> "-consoleLog".equals(arg));

if (showConsoleLog) {
    System.out.println();
    System.out.println("***** Press Enter to exit *****");
    // just wait for an Enter
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
        reader.readLine();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

With the -consoleLog handling, the following call will open a new shell that shows the result and waits for the user to press ENTER to close the shell and finish the application.

eclipse.exe test -consoleLog

bnd export

Although these results are already pretty nice, it can be done even better. With bnd you are able to create a single executable jar that starts the OSGi application. This makes it easier to distribute the command line application. And the call of the application is similarly easy compared to the native executable, while there is no native code inside and therefore it is easily exchangeable between operating systems.

Using the bnd-export-maven-plugin you can achieve the same result even with a PDE-Tycho based build. But of course you need to prepare things to make it work.

The first thing to know is that the bnd-export-maven-plugin needs a bndrun file as input. So now create a file headless.bndrun in org.fipro.headless.product project that looks similar to this:

-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
org.fipro.inverter.api,\
org.fipro.inverter.command,\
org.fipro.inverter.provider,\
org.fipro.headless.app,\
org.apache.felix.gogo.command,\
org.apache.felix.gogo.runtime,\
org.apache.felix.gogo.shell,\
org.eclipse.equinox.console,\
org.eclipse.osgi.services,\
org.eclipse.osgi.util,\
org.apache.felix.scr

-runproperties: \
osgi.console=

  • As we want our Eclipse Equinox based application to be bundled as a single executable jar, we specify Equinox as our OSGi framework via -runfw: org.eclipse.osgi.
  • Via -runbundles we specify the bundles that should be added to the runtime.
  • The settings below -runproperties are needed to handle the Equinox OSGi console correctly.

Unfortunately there is no automatic way to transform a PDE product definition to a bndrun file, at least I am not aware of it. And yes there is some duplication involved here, but compared to the result it is acceptable IMHO. Anyhow, with some experience in scripting it should be easy to automatically create the bndrun file out of the product definition at build time.

Now enable the bnd-export-maven-plugin for the product build in the pom.xml of org.fipro.headless.product. Note that even with a pomless build it is possible to specify a dedicated pom.xml in a project if something in addition to the default build is needed (which is the case here).

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-export-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <failOnChanges>false</failOnChanges>
    <bndruns>
      <bndrun>headless.bndrun</bndrun>
    </bndruns>
    <bundles>
      <include>${project.build.directory}/repository/plugins/*</include>
    </bundles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>export</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The bndruns configuration property points to the headless.bndrun we created before. In the bundles configuration property we point to the build result of the tycho-p2-repository-plugin to build up the implicit repository. This way we are sure that all required bundles are available without the need to specify any additional repository.

After a new build you will find the file headless.jar in org.fipro.headless.product/target. You can start the command line application via

java -jar headless.jar

You will notice that the OSGi console is started no matter which parameters are added to the command line. And the command line parameters are not evaluated, because the application was not started by the Equinox launcher but by the bnd launcher. Therefore the EnvironmentInfo is not initialized correctly.

Unfortunately Equinox publishes the EnvironmentInfo as a service anyhow, even if it is not initialized. Therefore the EquinoxStarter will be satisfied and activated. But we will get a NullPointerException (that is silently caught) when trying to access the framework and/or non-framework args. For good coding standards the EquinoxStarter needs to check if EnvironmentInfo is correctly initialized, otherwise it should do nothing. The code could look similar to this snippet:

@Component(immediate = true)
public class EquinoxStarter {

  @Reference
  EnvironmentInfo environmentInfo;

  @Reference
  StringInverter inverter;

  @Activate
  void activate() {
    if (environmentInfo.getFrameworkArgs() != null
      && environmentInfo.getNonFrameworkArgs() != null) {

      // check if -console was provided as argument
      boolean isInteractive = Arrays
        .stream(environmentInfo.getFrameworkArgs())
        .anyMatch(arg -> "-console".equals(arg));
      // check if -consoleLog was provided as argument
      boolean showConsoleLog = Arrays
        .stream(environmentInfo.getFrameworkArgs())
        .anyMatch(arg -> "-consoleLog".equals(arg));

      for (String arg : this.environmentInfo.getNonFrameworkArgs()) {
        System.out.println(inverter.invert(arg));
      }

      // If the -consoleLog parameter is used, a separate shell is opened. 
      // To avoid that it is closed immediately a simple input is requested to
      // close, so a user can inspect the outputs.
      if (showConsoleLog) {
        System.out.println();
        System.out.println("***** Press Enter to exit *****");
        // just wait for an Enter
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(System.in))) {
          reader.readLine();
        } catch (IOException e) {
          e.printStackTrace();
        }
      }

      if (!isInteractive) {
        // shutdown the application if no console was opened
        // only needed if osgi.noShutdown=true is configured
        System.exit(0);
      }
    }
  }
}

This way we avoid that the EquinoxStarter executes any code when started via the bnd launcher. So apart from component instance creation and destruction, nothing happens.

To handle launching via bnd launcher, we need another starter. We create a new immediate component named BndStarter.

@Component(immediate = true)
public class BndStarter {
    ...
}

The bnd launcher provides the command line parameters in a different way. Instead of EnvironmentInfo you need to get the aQute.launcher.Launcher injected together with its service properties. Inside the service properties map there is an entry for launcher.arguments whose value is a String[]. To avoid a dependency to aQute classes in our code, we reference Object and use a target filter for launcher.arguments, which works fine as the Launcher is also published as Object to the ServiceRegistry.

String[] launcherArgs;

@Reference(target = "(launcher.arguments=*)")
void setLauncherArguments(Object object, Map<String, Object> map) {
    this.launcherArgs = (String[]) map.get("launcher.arguments");
}

Although not strictly necessary, we add some code to align the behavior when started via the bnd launcher with the behavior when started with the Equinox launcher. That means we check for the -console parameter and stop the application if that parameter is missing. The check for -consoleLog would also not be needed, as the bnd launcher stays in the same command shell like eclipsec.exe, but we also filter it out before processing, just in case someone tries it out.

The complete code of BndStarter would then look like this:

@Component(immediate = true)
public class BndStarter {

  String[] launcherArgs;

  @Reference(target = "(launcher.arguments=*)")
  void setLauncherArguments(Object object, Map<String, Object> map) {
    this.launcherArgs = (String[]) map.get("launcher.arguments");
  }

  @Reference
  StringInverter inverter;

  @Activate
  void activate() {
    boolean isInteractive = Arrays
      .stream(launcherArgs)
      .anyMatch(arg -> "-console".equals(arg));

    // clear launcher arguments from possible framework parameter
    String[] args = Arrays
      .stream(launcherArgs)
      .filter(arg -> !"-console".equals(arg) && !"-consoleLog".equals(arg))
      .toArray(String[]::new);

    for (String arg : args) {
      System.out.println(inverter.invert(arg));
    }

    if (!isInteractive) {
      // shutdown the application if no console was opened
      // only needed if osgi.noShutdown=true is configured
      System.exit(0);
    }
  }
}

After building again, the application will directly close without the -console parameter. And if -console is used, the OSGi console stays open.

The above handling was simply done to have something similar to the Eclipse product build. As the Equinox launcher does not automatically start all bundles, the -console parameter triggers a process that starts the necessary Gogo Shell bundles. The bnd launcher on the other hand always starts all installed bundles. The OSGi console therefore always comes up and can be seen in the command shell, even before the BndStarter kills it. If that behavior does not satisfy your needs, you could also easily build two application variants: one with a console and one without. You simply need to create another bndrun file that does not contain the console bundles and no console configuration properties.

-runee: JavaSE-1.8
-runfw: org.eclipse.osgi
-runsystemcapabilities: ${native_capability}

-resolve.effective: active;skip:="osgi.service"

-runrequires: \
    osgi.identity;filter:='(osgi.identity=org.fipro.headless.app)'

-runbundles: \
    org.fipro.inverter.api,\
    org.fipro.inverter.provider,\
    org.fipro.headless.app,\
    org.eclipse.osgi.services,\
    org.eclipse.osgi.util,\
    org.apache.felix.scr

If you add that additional bndrun file to the bndruns section of the bnd-export-maven-plugin the build will create two exports.

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-export-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <failOnChanges>false</failOnChanges>
    <bndruns>
      <bndrun>headless.bndrun</bndrun>
      <bndrun>headless_console.bndrun</bndrun> 
    </bndruns>
    <bundles>
      <include>target/repository/plugins/*</include>
    </bundles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>export</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To check if the application should be stopped or not, you then need to check for the system property osgi.console.

boolean hasConsole = System.getProperty("osgi.console") != null;

If a console is configured, do not stop the application. If there is no configuration for osgi.console, call System.exit(0).
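
Put together, the shutdown check in BndStarter could then look roughly like this sketch:

boolean hasConsole = System.getProperty("osgi.console") != null;
if (!hasConsole) {
    // no console configured via -Dosgi.console, shut down after processing the arguments
    System.exit(0);
}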

This tutorial showed a pretty simple example to explain the basic concepts of building a command line application from an Eclipse project. A real-world example can be seen in the APP4MC Model Migration addon, where the above approach is used to create a standalone model migration command line tool. This tool can be used in other environments like build servers, while the integration in the Eclipse IDE remains in the same project structure.

The sources of this tutorial are available on GitHub.

If you are interested in finding out more about the Maven plugins from bnd you might want to watch this talk from EclipseCon Europe 2019. As you can see they are helpful in several situations when building OSGi applications.

Update: configurable console with bnd launcher

I tried to make the executable jar behave similarly to the Equinox one. That means I wanted to create an application where I am able to configure via command line parameter whether the console should be activated or not. Achieving this took me quite a while, as I needed to find out what causes the console to start with Equinox or not. The important thing is that the property osgi.console needs to be set to an empty String. The value is actually the port to connect to, and with that value set to an empty String, the current shell is used. In the bndrun files this property is set via -runproperties. If you remove it from the bndrun file, the console actually never starts, even if passed as a system property on the command line.

Section 19.4.6 in Launching | bnd explains why. It simply says that you are able to override a launcher property via system property, but you cannot add a launcher property via system property. Knowing this I solved the issue by setting the osgi.console property to an invalid value in the -runproperties section.

-runproperties: \
    osgi.console=xxx

This way the application can be started with or without a console, dependent on whether osgi.console is provided as system parameter via command line or not.

Of course the check for the -console parameter should be removed from the BndStarter to avoid that users need to provide both arguments to open a console!

I added the headless_configurable.bndrun file to the repository to show this:

Launch without console:

java -jar headless_configurable.jar Test

Launch with console:

java -jar -Dosgi.console= headless_configurable.jar

Update: bnd-indexer-maven-plugin

I got this pull request that showed an interesting extension to my approach. It uses the bnd-indexer-maven-plugin to create an index that can then be used in the bndrun files to make it editable with bndtools.

<plugin>
  <groupId>biz.aQute.bnd</groupId>
  <artifactId>bnd-indexer-maven-plugin</artifactId>
  <version>4.3.1</version>
  <configuration>
    <inputDir>${project.build.directory}/repository/plugins/</inputDir>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <id>index</id>
      <goals>
        <goal>local-index</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To make use of this you first need to execute the build without the bnd-export-maven-plugin so the index is created out of the product build. After that you can create or edit a bndrun file by adding these lines on top:

index: target/index.xml;name="org.fipro.headless.product"

-standalone: ${index}

I am personally not a big fan of such dependencies in the build timeline. But it is surely helpful for creating the bndrun file.


POM-less Tycho enhanced

With Tycho 0.24 POM-less Tycho builds were introduced. This Maven extension was a big step forward with regards to build configuration, as plugin, feature and plugin test projects don’t need a dedicated pom.xml file anymore. Therefore there are fewer pom.xml files that need to be updated with every version increase. Instead these pom.xml files are generated at build time out of the metadata provided via the MANIFEST.MF file.

Although the initial implementation was already a big improvement, it had some flaws:

  • Only plugins, features and test plugins were supported
  • target definition, update site and product builds still needed a dedicated pom.xml file
  • test plugins/bundles needed the suffix .tests
  • in structured environments “POM-less parents” or “connector POMs” had to be added manually

With Tycho 1.5 these flaws are finally fixed to further improve POM-less Tycho builds. To make use of those enhancements you need to follow these steps:

  1. Update the version of the tycho-pomless extension in .mvn/extensions.xml to 1.5.1
  2. Update the tycho version in the parent pom.xml to 1.5.1 (ideally only in the properties section to avoid changes in multiple locations)
  3. Make the parent pom.xml file resolvable by sub-modules.
    This can be done the following ways:

    1. Place the parent pom.xml file in the root folder of the project structure (default)
    2. Configure the parent POM location globally via system property tycho.pomless.parent which defaults to “..”.
    3. Override the global default by defining tycho.pomless.parent in the build.properties of each individual project (see the example below this list).
    4. In pom.xml files that are not generated by the tycho-pomless extension but managed manually (e.g. because of additional build plugins), configure the relativePath for the parent like shown in the following example:
      <parent>
          <groupId>my.group.id</groupId>
          <artifactId>parent</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <relativePath>../../pom.xml</relativePath>
      </parent>
  4. Delete the pom.xml in the target definition project (if nothing special is configured in there).
  5. Delete the pom.xml in the update site project (if nothing special is configured in there).
  6. Delete the pom.xml in the product project (if nothing special is configured in there).

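For illustration, a hedged example of the build.properties override mentioned in step 3; the relative path is an assumption and depends on your actual folder structure:

# build.properties of an individual project,
# overriding the parent POM location (the global default is "..")
tycho.pomless.parent = ../..
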
If you have your project structure set up similar to the structured environments, the following steps need to be performed additionally in order to make POM-less Tycho work correctly:

  1. Change the modules section of the parent pom.xml to only point to the structure folders and not every single module:
    <modules>
        <module>bundles</module>
        <module>tests</module>
        <module>features</module>
        <module>releng</module>
    </modules>

    This will automatically generate the “connector POMs” that point to the parent pom.xml in the module folders. The name of these generated files is .polyglot.pom.tycho and they are removed once the build is finished. The generated “connector POM” files can even be referenced in the relativePath.

    <parent>
        <groupId>my.group.id</groupId>
        <artifactId>bundles</artifactId>
        <version>1.0.0-SNAPSHOT</version>
        <relativePath>../pom.tycho</relativePath>
    </parent>

    The generation of the “connector POMs” has the advantage that new modules can be simply created and added to the build, without the need to update the parent pom.xml modules section. On the other hand it is not possible to skip single modules in the build by removing them from the modules section.

    Note:
    Additionally a file named pom.tycho is generated in each sub-folder that lists the modules detected by the automatic module detection. Looking into the sources it seems like the idea of this file is to separate the “connector POM” from the module collection, to be able to manually list the modules that should be built. That file is deleted again after the build if it was generated, but if it already exists it stays untouched.
    While testing in a Windows environment I noticed that sometimes the pom.tycho files stay as leftovers in the sub-folders even though they were generated. This seems to be a bug and I reported it here. In case you see such leftovers that are not intended, make sure you delete them and do not commit them into the repository if you like the generation approach. Otherwise the automatic module detection is not executed and therefore new modules are not added automatically.

  2. Ensure that all modules are placed in a folder structure with the following folder names:
    1. bundles
    2. plugins
    3. tests
    4. features
    5. sites
    6. products
    7. releng

    Note:
    If you have additional folders or folders with different names, they are not taken up for automatic “connector POM” generation. To support additional folder names you can specify the system property tycho.pomless.aggregator.names where the value is a comma separated list of folder names.
    For example, let’s assume instead of a releng folder the build related modules are placed in a folder named build. So instead of releng you would point to build in the modules section. Starting the build now leads to an error saying that there is no pom.xml found in the build folder. Starting the build the following way solves that issue:

    mvn -Dtycho.pomless.aggregator.names=bundles,plugins,tests,features,sites,products,build clean verify

With these enhancements it is now possible to set up a Maven Tycho build for PDE based Eclipse RCP projects with a single pom.xml file.

Note:
The Maven versions 3.6.1 and 3.6.2 are known to fail with the pomless extension. There are issues reported here and here. Both are already fixed, so with Maven 3.6.3 the issues should not occur anymore.

I would also like to mention that these enhancements were contributed by Christoph Läubrich, who wasn’t a committer in the Tycho project at that time. Another good example for the power of open source! So thanks for the contributions to make the POM-less Tycho build more convenient for all of us.


Add JavaFX controls to a SWT Eclipse 4 application – Eclipse RCP Cookbook UPDATE

I wrote about this topic already a while ago on another blog. But since then quite a few things have changed and I wanted to publish an updated version of that blog post. For various reasons I decided to publish it here ;-).


As explained in JavaFX Interoperability with SWT it is possible to embed JavaFX controls in a SWT UI. This is useful for example if you want to softly migrate big applications from SWT to JavaFX or if you need to add animations or special JavaFX controls without completely migrating your application.

The following recipe will show how to integrate JavaFX with an Eclipse 4 application. It will cover the usage of Java 8 with integrated JavaFX, and Java 11 with separate JavaFX 11. The steps for Java 11 should also apply for newer versions of Java and JavaFX.

Cookware

For Java 11 with separate JavaFX 11 the following preparations need to be done:

Ingredients

This recipe uses the Eclipse RCP Cookbook – Basic Recipe. To get started fast with this recipe, we have prepared the basic recipe for you on GitHub.

To use the prepared basic recipe to follow this tutorial, import the project by cloning the Git repository:

  • File → Import → Git → Projects from Git
  • Click Next
  • Select Clone URI
  • Enter URI https://github.com/fipro78/e4-cookbook-basic-recipe.git
  • Click Next
  • Select the master branch
  • Click Next
  • Choose a directory where you want to store the checked out sources
  • Click Next
  • Select Import existing projects
  • Click Next
  • Click Finish

Preparation

Step 1: Update the Target Platform

  • Open the target definition org.fipro.eclipse.tutorial.target.target in the project org.fipro.eclipse.tutorial.target
  • Add a new Software Site by clicking Add… in the Locations section
    • Select Software Site
    • Software Site for the e(fx)clipse 3.6.0 release build
      http://download.eclipse.org/efxclipse/runtime-released/3.6.0/site
    • Expand FX Target and check Minimal JavaFX OSGi integration bundles
      (Runtime extension to add JavaFX support)
  • Optional:
    If you use the RCP e4 Target Platform Feature instead, to get additional e(fx)clipse features that can be included, you additionally need to add p2 and EMF Edit to the target definition because of transitive dependencies

    • Select the update site http://download.eclipse.org/releases/2019-06
    • Click Edit
    • Check Equinox p2, headless functionalities
    • Check EMF Edit
  • Click Finish
  • Activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

Java 11

If you use Java 11 or greater you need to add an additional update site as explained here.

  • Add a new Software Site by clicking Add… in the Locations section
    • Select Software Site
    • http://downloads.efxclipse.bestsolution.at/p2-repos/openjfx-11/repository/
    • Disable Group by Category as the items are not categorized and check all available items
      • openjfx.media.feature 
      • openjfx.standard.feature
      • openjfx.swing.feature
      • openjfx.swt.feature
      • openjfx.web.feature

Note:
If you are using the Target Definition DSL, the TPD file should look similar to the following snippet which includes the Minimal JavaFX OSGi integration bundles and the RCP e4 Target Platform Feature:

target "E4 Cookbook Target Platform"

with source requirements

location "http://download.eclipse.org/releases/2019-06" {
    org.eclipse.equinox.executable.feature.group
    org.eclipse.sdk.feature.group
    org.eclipse.equinox.p2.core.feature.feature.group
    org.eclipse.emf.edit.feature.group
}

location "http://download.eclipse.org/efxclipse/runtime-released/3.6.0/site" {
    org.eclipse.fx.runtime.min.feature.feature.group
    org.eclipse.fx.target.rcp4.feature.feature.group
}

// only needed for Java 11 with OpenJFX 11
location "http://downloads.efxclipse.bestsolution.at/p2-repos/openjfx-11/repository/" {
    openjfx.media.feature.feature.group
    openjfx.standard.feature.feature.group
    openjfx.swing.feature.feature.group
    openjfx.swt.feature.feature.group
    openjfx.web.feature.feature.group
}

Step 2: Update the Plug-in project

  • Open the InverterPart in the project org.fipro.eclipse.tutorial.inverter
    • Add a javafx.embed.swt.FXCanvas to the parent Composite in InverterPart#postConstruct(Composite)
    • Create an instance of javafx.scene.layout.BorderPane
    • Create a javafx.scene.Scene instance that takes the created BorderPane as root node and sets the background color to be the same as the background color of the parent Shell
    • Set the created javafx.scene.Scene to the FXCanvas
// add FXCanvas for adding JavaFX controls to the UI
FXCanvas canvas = new FXCanvas(parent, SWT.NONE);
GridDataFactory
    .fillDefaults()
    .grab(true, true)
    .span(3, 1)
    .applyTo(canvas);

// create the root layout pane
BorderPane layout = new BorderPane();

// create a Scene instance
// set the layout container as root
// set the background fill to the background color of the shell
Scene scene = new Scene(layout, Color.rgb(
    parent.getShell().getBackground().getRed(),
    parent.getShell().getBackground().getGreen(),
    parent.getShell().getBackground().getBlue()));

// set the Scene to the FXCanvas
canvas.setScene(scene);

Now JavaFX controls can be added to the scene graph via the BorderPane instance.

  • Remove the output control of type org.eclipse.swt.widgets.Text
  • Create an output control of type javafx.scene.control.Label
  • Add the created javafx.scene.control.Label to the center of the BorderPane
javafx.scene.control.Label output = new javafx.scene.control.Label();
layout.setCenter(output);

Add some animations to see some more JavaFX features.

  • Create a javafx.animation.RotateTransition that rotates the output label.
  • Create a javafx.animation.ScaleTransition that scales the output label.
  • Create a javafx.animation.ParallelTransition that combines the RotateTransition and the ScaleTransition. This way both transitions are executed in parallel.
  • Add starting the animation in the SelectionAdapter and the KeyAdapter that are executed for inverting a String.
RotateTransition rotateTransition = 
    new RotateTransition(Duration.seconds(1), output);
rotateTransition.setByAngle(360);

ScaleTransition scaleTransition = 
    new ScaleTransition(Duration.seconds(1), output);
scaleTransition.setFromX(1.0);
scaleTransition.setFromY(1.0);
scaleTransition.setToX(4.0);
scaleTransition.setToY(4.0);

ParallelTransition parallelTransition = 
    new ParallelTransition(rotateTransition, scaleTransition);

button.addSelectionListener(new SelectionAdapter() {
    @Override
    public void widgetSelected(SelectionEvent e) {
        output.setText(StringInverter.invert(input.getText()));
        parallelTransition.play();
    }
});

Step 3: Update the Product Configuration

  • Open the file org.fipro.eclipse.tutorial.app.product in the project org.fipro.eclipse.tutorial.product
  • Switch to the Contents tab and add additional features
    • Option A: Use the Minimal JavaFX OSGi integration bundles
      • org.eclipse.fx.runtime.min.feature
    • Option B: Use the RCP e4 Target Platform Feature
      • org.eclipse.fx.target.rcp4.feature
      • org.eclipse.equinox.p2.core.feature
      • org.eclipse.ecf.core.feature
      • org.eclipse.ecf.filetransfer.feature
      • org.eclipse.emf.edit
  • Switch to the Launching tab
    • Add -Dosgi.framework.extensions=org.eclipse.fx.osgi to the VM Arguments
      (adapter hook to get JavaFX-SWT integration on the classpath)

Java 11:

You also need to add the openjfx features to bundle it with your application:

  • openjfx.media.feature
  • openjfx.standard.feature
  • openjfx.swing.feature
  • openjfx.swt.feature
  • openjfx.web.feature

  • Start the application from within the IDE
    • Open the Product Configuration in the org.fipro.eclipse.tutorial.product project
    • Select the Overview tab
    • Click Launch an Eclipse Application in the Testing section

Note:
If you have org.eclipse.equinox.p2.reconciler.dropins in the Start Levels of the Configuration tab, you also need to add org.eclipse.equinox.p2.extras.feature in the included features of the Contents tab so the product build succeeds in later stages. I personally tend to remove it as dropins have been deprecated by the p2 team quite a while ago.

The started application should look similar to the following screenshot.

Maven Tycho build

To build a deliverable product it is recommended to use Maven Tycho. Using pomless Tycho you only need a single pom.xml file for the build configuration and not one pom.xml file per project. Since Tycho 1.5 this is even true for the target platform, update site and product projects.

To enable the Maven build with pomless Tycho for the example project you need to create two files:

  1. Create e4-cookbook-basic-recipe/.mvn/extensions.xml to enable the pomless Tycho extension
    <?xml version="1.0" encoding="UTF-8"?>
    <extensions>
        <extension>
            <groupId>org.eclipse.tycho.extras</groupId>
            <artifactId>tycho-pomless</artifactId>
            <version>1.5.1</version>
        </extension>
    </extensions>
  2. Create e4-cookbook-basic-recipe/pom.xml to configure the Maven build
    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
    
      <groupId>org.fipro.eclipse.tutorial</groupId>
      <artifactId>parent</artifactId>
      <version>1.0.0-SNAPSHOT</version>
    
      <packaging>pom</packaging>
    
      <modules>
        <module>org.fipro.eclipse.tutorial.target</module>
        <module>org.fipro.eclipse.tutorial.inverter</module>
        <module>org.fipro.eclipse.tutorial.app</module>
        <module>org.fipro.eclipse.tutorial.feature</module>
        <module>org.fipro.eclipse.tutorial.product</module>
      </modules>
    
      <properties>
        <tycho-version>1.5.1</tycho-version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      </properties>
    
      <build>
        <plugins>
          <plugin>
            <groupId>org.eclipse.tycho</groupId>
            <artifactId>tycho-maven-plugin</artifactId>
            <version>${tycho-version}</version>
            <extensions>true</extensions>
          </plugin>
          <plugin>
            <groupId>org.eclipse.tycho</groupId>
            <artifactId>target-platform-configuration</artifactId>
            <version>${tycho-version}</version>
            <configuration>
              <target>
                <artifact>
                  <groupId>org.fipro.eclipse.tutorial</groupId>
                  <artifactId>org.fipro.eclipse.tutorial.target</artifactId>
                  <version>1.0.0-SNAPSHOT</version>
                </artifact>
              </target>
              <environments>
                <environment>
                  <os>win32</os>
                  <ws>win32</ws>
                  <arch>x86_64</arch>
                </environment>
                <environment>
                  <os>linux</os>
                  <ws>gtk</ws>
                  <arch>x86_64</arch>
                </environment>
                <environment>
                  <os>macosx</os>
                  <ws>cocoa</ws>
                  <arch>x86_64</arch>
                </environment>
              </environments>
            </configuration>
          </plugin>
        </plugins>
    
        <pluginManagement>
          <plugins>
            <plugin>
              <groupId>org.eclipse.tycho</groupId>
              <artifactId>tycho-p2-director-plugin</artifactId>
              <version>${tycho-version}</version>
            </plugin>
          </plugins>
        </pluginManagement>
      </build>
    </project>

As JavaFX is not on the default classpath, the location of the JavaFX libraries needs to be configured in the Tycho build for compile-time resolution. If the build is executed with Java 8 for Java 8, the following section needs to be added in the pluginManagement section, where the JAVA_HOME environment variable points to your JDK installation:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-compiler-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <encoding>UTF-8</encoding>
    <extraClasspathElements>
      <extraClasspathElement>
        <groupId>com.oracle</groupId>
        <artifactId>javafx</artifactId>
        <version>8.0.0-SNAPSHOT</version>
        <systemPath>${JAVA_HOME}/jre/lib/jfxswt.jar</systemPath>
        <scope>system</scope>
      </extraClasspathElement>
    </extraClasspathElements>
  </configuration>
</plugin>

Java 11

With Java 11 it is slightly more complicated. The OpenJFX libraries are available via Maven Central and can be added as extra classpath elements via Maven, but the javafx-swt module is not available via Maven Central, as reported here. That means for OpenJFX 11 the following section needs to be added in the pluginManagement section, where the JAVAFX_HOME environment variable points to your OpenJFX installation:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-compiler-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <encoding>UTF-8</encoding>
    <extraClasspathElements>
      <extraClasspathElement>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-controls</artifactId>
        <version>11.0.2</version>
      </extraClasspathElement>
      <extraClasspathElement>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-swt</artifactId>
        <version>11.0.2</version>
        <systemPath>${JAVAFX_HOME}/lib/javafx-swt.jar</systemPath>
        <scope>system</scope>
      </extraClasspathElement>
    </extraClasspathElements>
  </configuration>
</plugin>

Start the build

mvn clean verify

The resulting product variants for each platform are located under
e4-cookbook-basic-recipe/org.fipro.eclipse.tutorial.product/target/products

Note:
If you included the openjfx bundles in your product and start the product with Java 8, the JavaFX 8 classes will be used. If you use Java 11 or newer to start the application, the classes from the openjfx bundles will be loaded. The e(fx)clipse classloader hook takes care of this.

Currently only OpenJFX 11 is available in the re-bundled form. If you are interested in newer OpenJFX versions you can have a look at the openjfx-osgi repository on GitHub or get in contact with BestSolution.at who created and provide the bundles.

The complete source code of the example can be found on GitHub.


OSGi Event Admin – Publish & Subscribe

In this blog post I want to write about the publish & subscribe mechanism in OSGi, provided via the OSGi Event Admin Service. Of course I will show this in combination with OSGi Declarative Services, because this is the technology I currently like very much, as you probably know from my previous blog posts.

I will start with some basics and then show an example as usual. At last I will give some information about how to use the event mechanism in Eclipse RCP development, especially related to the combination between OSGi services and the GUI.

If you want to read further details on the Event Admin Service Specification have a look at the OSGi Spec. In Release 6 it is covered in the Compendium Specification Chapter 113.

Let’s start with the basics. The Event Admin Service is based on the Publish-Subscribe pattern. There is an event publisher and an event consumer. Both do not know each other in any way, which provides a high decoupling. Simplified you could say, the event publisher sends an event to a channel, not knowing if anybody will receive that event. On the other side there is an event consumer ready to receive events, not knowing if there is anybody available for sending events. This simplified view is shown in the following picture:

Technically both sides are using the Event Admin Service in some way. The event publisher uses it directly to send an event to the channel. The event consumer uses it indirectly by registering an event handler to the EventAdmin to receive events. This can be done programmatically. But with OSGi DS it is very easy to register an event handler by using the whiteboard pattern.

Event

An Event object has a topic and some event properties. It is an immutable object to ensure that every handler gets the same object with the same state.

The topic defines the type of the event and is intended to serve as first-level filter for determining which handlers should receive the event. It is a String arranged in a hierarchical namespace. And the recommendation is to use a convention similar to the Java package name scheme by using reverse domain names (fully/qualified/package/ClassName/ACTION). Doing this ensures uniqueness of events. This is of course only a recommendation and you are free to use pseudo class names to make the topic better readable.

Event properties are used to provide additional information about the event. The key is a String and the value can be technically any object. But it is recommended to only use String objects and primitive type wrappers. There are two reasons for this:

  1. Other types cannot be passed to handlers that reside external from the Java VM.
  2. Other classes might be mutable, which means any handler that receives the event could change values. This breaks the immutability rule for events.

Common Bundle

It is a common best practice to place shared artifacts in a common bundle on which both the event publisher bundle and the event consumer bundle can depend. In our case this will only be the definition of the supported topics and property keys in a constants class, to ensure that both implementations share the same definition without being dependent on each other.

  • Create a new project org.fipro.mafia.common
  • Create a new package org.fipro.mafia.common
  • Create a new class MafiaBossConstants
public final class MafiaBossConstants {

    private MafiaBossConstants() {
        // private default constructor for constants class
        // to avoid someone extends the class
    }

    public static final String TOPIC_BASE = "org/fipro/mafia/Boss/";
    public static final String TOPIC_CONVINCE = TOPIC_BASE + "CONVINCE";
    public static final String TOPIC_ENCASH = TOPIC_BASE + "ENCASH";
    public static final String TOPIC_SOLVE = TOPIC_BASE + "SOLVE";
    public static final String TOPIC_ALL = TOPIC_BASE + "*";

    public static final String PROPERTY_KEY_TARGET = "target";

}
  • PDE
    • Open the MANIFEST.MF file and on the Overview tab set the Version to 1.0.0 (remove the qualifier).
    • Switch to the Runtime tab and export the org.fipro.mafia.common package.
    • Specify the version 1.0.0 on the package via Properties…
  • Bndtools
    • Open the bnd.bnd file
    • Add the package org.fipro.mafia.common to the Export Packages

In MafiaBossConstants we specify the topic base with a pseudo class org.fipro.mafia.Boss, which results in the topic base org/fipro/mafia/Boss. We specify action topics that start with the topic base and end with the actions CONVINCE, ENCASH and SOLVE. And additionally we specify a topic that starts with the base and ends with the wildcard ‘*’.

These constants will be used by the event publisher and the event consumer soon.

Event Publisher

The Event Publisher uses the Event Admin Service to send events synchronously or asynchronously. Using DS this is pretty easy.

We will create an Event Publisher based on the idea of a mafia boss. The boss simply commands a job execution and does not care who is doing it. Also it is not of interest if there are many people doing the same job. The job has to be done!

  • Create a new project org.fipro.mafia.boss
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.boss project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.boss project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common

Note:
Adding org.osgi.service.event to the Imported Packages with PDE on a current Equinox target will provide a package version 1.3.1. You need to change this to 1.3.0 if you intend to run the same bundle with a different Event Admin Service implementation. In general it is a bad practice to rely on a bugfix version. Especially when thinking about interfaces, as any change to an interface typically is a breaking change.
To clarify the statement above. As the package org.osgi.service.event contains more than just the EventAdmin interface, the bugfix version increase is surely correct in Equinox, as there was probably a bugfix in some code inside the package. The only bad thing is to restrict the package wiring on the consumer side to a bugfix version, as this would restrict your code to only run with the Equinox implementation of the Event Admin Service.

  • Create a new package org.fipro.mafia.boss
  • Create a new class BossCommand
@Component(
    property = {
        "osgi.command.scope=fipro",
        "osgi.command.function=boss" },
    service = BossCommand.class)
public class BossCommand {

    @Reference
    EventAdmin eventAdmin;

    @Descriptor("As a mafia boss you want something to be done")
    public void boss(
        @Descriptor("the command that should be executed. "
            + "possible values are: convince, encash, solve")
        String command,
        @Descriptor("who should be 'convinced', "
            + "'asked for protection money' or 'finally solved'")
        String target) {

        // create the event properties object
        Map<String, Object> properties = new HashMap<>();
        properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
        Event event = null;

        switch (command) {
            case "convince":
                event = new Event(MafiaBossConstants.TOPIC_CONVINCE, properties);
                break;
            case "encash":
                event = new Event(MafiaBossConstants.TOPIC_ENCASH, properties);
                break;
            case "solve":
                event = new Event(MafiaBossConstants.TOPIC_SOLVE, properties);
                break;
            default:
                System.out.println("Such a command is not known!");
        }

        if (event != null) {
            eventAdmin.postEvent(event);
        }
    }
}

Note:
The code snippet above uses the annotation @Descriptor to specify additional information for the command. This information will be shown when executing help boss in the OSGi console. To make this work with PDE you need to import the package org.apache.felix.service.command with status=provisional. Because the PDE editor does not support adding additional information to package imports, you need to do this manually in the MANIFEST.MF tab of the Plugin Manifest Editor. The Import-Package header would look like this:

Import-Package: org.apache.felix.service.command;status=provisional;version="0.10.0",
 org.fipro.mafia.common;version="[1.0.0,2.0.0)",
 org.osgi.service.component.annotations;version="[1.3.0,2.0.0)";resolution:=optional,
 org.osgi.service.event;version="[1.3.0,2.0.0)"

With Bndtools you need to add org.apache.felix.gogo.runtime to the Build Path in the bnd.bnd file so the @Descriptor annotation can be resolved.

There are three things to notice in the BossCommand implementation:

  • There is a mandatory reference to EventAdmin which is required for sending events.
  • The Event objects are created using a specific topic and a Map<String, Object> that contains the additional event properties.
  • The event is sent asynchronously via EventAdmin#postEvent(Event)

The BossCommand will create an event using the topic that corresponds to the given command parameter. The target parameter will be added to a map that is used as event properties. This event will then be sent to a channel via the EventAdmin. In the example we use EventAdmin#postEvent(Event) which sends the event asynchronously. That means we send the event but do not wait until the available handlers have finished the processing. If it is required to wait until the processing is done, you need to use EventAdmin#sendEvent(Event), which sends the event synchronously. But sending events synchronously is significantly more expensive, as the Event Admin Service implementation needs to ensure that every handler has finished processing before it returns. It is therefore recommended to prefer asynchronous event processing.
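
The following minimal sketch makes the difference explicit; the helper method and its name are made up purely for illustration and are not part of the example projects:

import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

public class DeliveryExample {

    // hypothetical helper to illustrate the two delivery modes of the Event Admin
    static void fire(EventAdmin eventAdmin, Event event, boolean synchronous) {
        if (synchronous) {
            // sendEvent() blocks until all matching handlers have finished processing
            eventAdmin.sendEvent(event);
        } else {
            // postEvent() returns immediately, handlers are notified asynchronously
            eventAdmin.postEvent(event);
        }
    }
}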

Note:
The code snippet uses the Field Strategy for referencing the EventAdmin. If you are using PDE this will work with Eclipse Oxygen. With Eclipse Neon you need to use the Event Strategy. In short, you need to write the bind-event-method for referencing EventAdmin because Equinox DS supports only DS 1.2 and the annotation processing in Eclipse Neon also only supports the DS 1.2 style annotations.

Event Consumer

In our example the boss does not have to tell someone explicitly to do the job. He just mentions that the job has to be done. Let’s assume we have a small organization without hierarchies. So we skip the captains etc. and simply implement some soldiers. They are specialized, so we have three soldiers, each listening to one specific topic.

  • Create a new project org.fipro.mafia.soldier
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.soldier project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.soldier project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common
  • Create a new package org.fipro.mafia.soldier
  • Create the following three soldiers Luigi, Mario and Giovanni
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_CONVINCE)
public class Luigi implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Luigi: "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
        + " was 'convinced' to support our family");
    }

}
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ENCASH)
public class Mario implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Mario: "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
        + " payed for protection");
    }

}
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_SOLVE)
public class Giovanni implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Giovanni: We 'solved' the issue with "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }

}

Technically we have created special EventHandler for different topics. You should notice the following facts:

  • We are using OSGi DS to register the event handler using the whiteboard pattern. On the consumer side we don’t need to know the EventAdmin itself.
  • We need to implement org.osgi.service.event.EventHandler
  • We need to register for a topic via service property event.topics, otherwise the handler will not listen for any event.
  • Via Event#getProperty(String) we are able to access event property values.

The following service properties are supported by event handlers:

  • event.topics: Specifies the topics of interest to an EventHandler service. This property is mandatory.
  • event.filter: Specifies a filter to further select events of interest to an EventHandler service. This property is optional.
  • event.delivery: Specifies the delivery qualities requested by an EventHandler service. This property is optional.

The property keys and some default keys for event properties are specified in org.osgi.service.event.EventConstants.

Launch the example

Before moving on and explaining further, let’s start the example and verify that each command from the boss is only handled by one soldier.

With PDE perform the following steps:

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Event Mafia
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.mafia.boss
      • org.fipro.mafia.common
      • org.fipro.mafia.soldier
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.apache.felix.gogo.shell
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.services
      • org.eclipse.osgi.util
    • Equinox Event Admin
      • org.eclipse.equinox.event
  • Ensure that Default Auto-Start is set to true
  • Click Run

With Bndtools perform the following steps:

  • Open the launch.bndrun file in the org.fipro.mafia.boss project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.mafia.boss
    • org.fipro.mafia.common
    • org.fipro.mafia.soldier
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Click Run OSGi

Execute the boss command to see the different results. This can look similar to the following:

osgi> boss convince Angelo
osgi> Luigi: Angelo was 'convinced' to support our family
boss encash Wong
osgi> Mario: Wong payed for protection
boss solve Tattaglia
osgi> Giovanni: We 'solved' the issue with Tattaglia

Handle multiple event topics

It is also possible to register for multiple event topics. Say Pete is a tough guy who is good in CONVINCE and SOLVE issues. So he registers for those topics.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_CONVINCE,
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_SOLVE })
public class Pete implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Pete: I took care of "
        + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }

}

As you can see the service property event.topics is declared multiple times via the @Component annotation type element property. This way an array of Strings is configured for the service property, so the handler reacts on both topics.

If you execute the example now and call boss convince xxx or boss solve xxx you will notice that Pete is also responding.

It is also possible to use the asterisk wildcard as last token of a topic. This way the handler will receive all events for topics that start with the left side of the wildcard.

Let’s say we have a very motivated young guy called Ray who wants to prove himself to the boss. So he takes every command from the boss. For this we set the service property event.topics=org/fipro/mafia/Boss/*

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ALL)
public class Ray implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        String topic = event.getTopic();
        Object target = event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET);

        switch (topic) {
            case MafiaBossConstants.TOPIC_CONVINCE:
                System.out.println("Ray: I helped in punching the shit out of " + target);
                break;
            case MafiaBossConstants.TOPIC_ENCASH:
                System.out.println("Ray: I helped getting the money from " + target);
                break;
            case MafiaBossConstants.TOPIC_SOLVE:
                System.out.println("Ray: I helped killing " + target);
                break;
            default: System.out.println("Ray: I helped with whatever was requested!");
        }
    }

}

Executing the example again will show that Ray is responding on every boss command.

It is also possible to filter events based on event properties by setting the service property event.filter. The value needs to be an LDAP filter. For example, although Ray is a motivated and loyal soldier, he refuses to handle events that target his friend Sonny.

The following snippet shows how to specify a filter that excludes event processing if the target is Sonny.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "=" + "(!(target=Sonny))"})
public class Ray implements EventHandler {

Execute the example and call two commands:

  • boss solve Angelo
  • boss solve Sonny

You will notice that Ray will respond on the first call, but he will not show up on the second call.

Note:
The filter expression can only be applied on event properties. It is not possible to use that filter on service properties.

At last it is possible to configure in which order the event handler wants the events to be delivered. This can either be ordered in the same way they are posted, or unordered. The service property event.delivery can be used to change the default behavior, which is to receive the events from a single thread in the same order as they were posted.

If an event handler does not need to receive events in the order as they were posted, you need to specify the service property event.delivery=async.unordered.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "="
            + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "="
            + "(!(target=Sonny))",
        EventConstants.EVENT_DELIVERY + "="
            + EventConstants.DELIVERY_ASYNC_UNORDERED})

The value for ordered delivery is async.ordered which is the default. The values are also defined in the EventConstants.

Capabilities

By using the event mechanism the code is highly decoupled. In general this is a good thing, but it also makes it hard to identify issues. One common issue in Eclipse RCP for example is to forget to automatically start the bundle org.eclipse.equinox.event. Things will simply not work in such a case, without any errors or warnings shown on startup.

The reason for this is that the related interfaces like EventAdmin and EventHandler are located in the bundle org.eclipse.osgi.services. The bundle wiring therefore shows that everything is ok on startup, because all interfaces and classes are available. But we require a bundle that contains an implementation of EventAdmin. If you remember my Getting Started Tutorial, such a requirement can be specified by using capabilities.

To show the implications, let’s play with the Run Configuration:

  • Uncheck org.eclipse.equinox.event from the list of bundles
  • Launch the configuration
  • execute lb on the command line (or ss on Equinox if you are more familiar with that) and check the bundle states
    • Notice that all bundles are in ACTIVE state
  • execute scr:list (or list on Equinox < Oxygen) to check the state of the DS components
    • Notice that org.fipro.mafia.boss.BossCommand has an unsatisfied reference
    • Notice that all other EventHandler services are satisfied

That is of course the correct behavior. The BossCommand service has a mandatory reference to EventAdmin and there is no such service available. So it has an unsatisfied reference. The EventHandler implementations do not have such a dependency, so they are satisfied. And that is even fine when thinking in the publish & subscribe pattern. They can be active and waiting for events to process, even if there is nobody available to send an event. But it makes it hard to find the issue. And when using Tycho and the Surefire Plugin to execute tests, it will not work at all, because nobody tells the test runtime that org.eclipse.equinox.event needs to be available and started in advance.

This can be solved by adding the Require-Capability header to require an osgi.service for objectClass=org.osgi.service.event.EventAdmin.

Require-Capability: osgi.service;
 filter:="(objectClass=org.osgi.service.event.EventAdmin)"

By specifying the Require-Capability header like above, the capability will be checked when the bundles are resolved. So starting the example after the Require-Capability header was added will show an error and the bundle org.fipro.mafia.boss will not be activated.

If you add the bundle org.eclipse.equinox.event again to the Run Configuration and launch it again, there are no issues.

As p2 still does not support OSGi capabilities, the p2.inf file needs to be created in the META-INF folder with the following content:

requires.1.namespace = osgi.service
requires.1.name = org.osgi.service.event.EventAdmin

Typically you would specify the Require-Capability to the EventAdmin service with the directive effective:=active. This implies that the OSGi framework will resolve the bundle without checking if another bundle provides the capability. It then serves more as documentation of which services are required, visible by looking into the MANIFEST.MF.
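
As a sketch, the header from above with that directive added would look like this:

Require-Capability: osgi.service;
 filter:="(objectClass=org.osgi.service.event.EventAdmin)";
 effective:=active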

Important Note:
Specifying the Require-Capability header and the p2 capabilities for org.osgi.service.event.EventAdmin will only work with Eclipse Oxygen. I contributed the necessary changes to Equinox for Oxygen M1 with Bug 416047. With a org.eclipse.equinox.event bundle in a version >= 1.4.0 you should be able to specify the capabilities. In previous versions the necessary Provide-Capability and p2 capability configuration in that bundle are missing.

Handling events in Eclipse RCP UI

When looking at the architecture of an Eclipse RCP application, you will notice that the UI layer is not created via OSGi DS (actually that is not a surprise!). And we cannot simply say that our view parts are created via DS, because the lifecycle of a part is controlled by other mechanics. But as an Eclipse RCP application is typically an application based on OSGi, all the OSGi mechanisms can be used. Of course not as conveniently as when using OSGi DS directly.

The direction from the UI layer to the OSGi service layer is pretty easy. You simply need to retrieve the service you want to use. With Eclipse 4 you get the desired service injected using @Inject, or @Inject in combination with @Service since Eclipse Oxygen (see OSGi Declarative Services news in Eclipse Oxygen). With Eclipse 3.x you needed to retrieve the service programmatically via the BundleContext.
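
As a minimal sketch, getting an OSGi service injected into an Eclipse 4 part could look like the following; the @Service annotation is the one from org.eclipse.e4.core.di.extensions and is available since Oxygen:

@Inject
@Service
private EventAdmin eventAdmin;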

The other way to communicate from a service to the UI layer is something different. There are two ways to consider from my point of view:

  • the Observer pattern, where the service accepts listeners that are called back on changes
  • the Publish & Subscribe pattern, where an EventHandler is registered for events

This blog post is about the event mechanism in OSGi, so I don’t want to go in detail with the observer pattern approach. It simply means that you extend the service interface to accept listeners to perform callbacks. Which in return means you need to retrieve the service in the view part for example, and register a callback function from there.

With the Publish & Subscribe pattern we register an EventHandler that reacts on events. It is a similar approach to the Observer pattern, with some slight differences. But this is not a design pattern blog post, we are talking about the event mechanism. And we already registered an EventHandler using OSGi DS. The difference to the scenario using DS is that we need to register the EventHandler programmatically. For OSGi experts that used the event mechanism before DS came up, this is nothing new. For all others that learn about it, it could be interesting.

The following snippet shows how to retrieve a BundleContext instance and register a service programmatically. In earlier days this was done in an Activator, as there you have access to the BundleContext. Nowadays it is recommended to use the FrameworkUtil class to retrieve the BundleContext when needed, and to avoid Activators to reduce startup time.

private ServiceRegistration<?> eventHandler;

...

// retrieve the bundle of the calling class
Bundle bundle = FrameworkUtil.getBundle(getClass());
BundleContext bc = (bundle != null) ? bundle.getBundleContext() : null;
if (bc != null) {
    // create the service properties instance
    Dictionary<String, Object> properties = new Hashtable<>();
    properties.put(EventConstants.EVENT_TOPIC, MafiaBossConstants.TOPIC_ALL);
    // register the EventHandler service
    eventHandler = bc.registerService(
        EventHandler.class.getName(),
        new EventHandler() {

            @Override
            public void handleEvent(Event event) {
                // ensure to update the UI in the UI thread
                Display.getDefault().asyncExec(() -> handlerLabel.setText(
                        "Received boss command "
                            + event.getTopic()
                            + " for target "
                            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)));
            }
        },
        properties);
}

This code can technically be added anywhere in the UI code, e.g. in a view, an editor or a handler. But of course you should be aware that the event handler also needs to be unregistered once the connected UI class is destroyed. For example, you implement a view part that registers a listener similar to the above to update the UI every time an event is received. That means the handler has a reference to a UI element that should be updated. If the part is destroyed, the UI element is destroyed as well. If you don't unregister the EventHandler when the part is destroyed, it will still be alive and react on events, and probably cause exceptions without proper disposal checks. It is also a cause for memory leaks, as the EventHandler references a UI element instance that is already disposed but cannot be cleaned up by the GC as it is still referenced.

Note:
The event handling is executed in its own event thread. Updates to the UI can only be performed in the main or UI thread, otherwise you will get a SWTException for Invalid thread access. Therefore it is necessary to ensure that UI updates performed in an event handler are executed in the UI thread. For further information have a look at Eclipse Jobs and Background Processing.
For the UI synchronization you should also consider using asynchronous execution via Display#asyncExec() or UISynchronize#asyncExec(). Using synchronous execution via syncExec() will block the event handler thread until the UI update is done.

If you stored the ServiceRegistration object returned by BundleContext#registerService() as shown in the example above, the following snippet can be used to unregister the handler if the part is destroyed:

if (eventHandler != null) {
    eventHandler.unregister();
}

In Eclipse 3.x this needs to be done in the overridden dispose() method. In Eclipse 4 it can be done in the method annotated with @PreDestroy.
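
For the Eclipse 4 case, a minimal sketch could look like this, assuming the eventHandler field from the registration snippet above:

@PreDestroy
void dispose() {
    // unregister the programmatically registered EventHandler when the part is
    // destroyed, to avoid callbacks on disposed widgets and memory leaks
    if (eventHandler != null) {
        eventHandler.unregister();
        eventHandler = null;
    }
}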

Note:
Ensure that the bundle that contains the code is in ACTIVE state so there is a BundleContext. This can be achieved by setting Bundle-ActivationPolicy: lazy in the MANIFEST.MF.

Handling events in Eclipse RCP UI with Eclipse 4

In Eclipse 4 the event handling mechanism is provided to the RCP development via the EventBroker. The EventBroker is a service that uses the EventAdmin and additionally provides injection support. To learn more about the EventBroker and the event mechanism provided by Eclipse 4 you should read the related tutorials on that topic.

We are focusing on the event consumer here. In addition to registering the EventHandler programmatically, Eclipse 4 makes it possible to specify a method that is called on event handling via method injection.

Such an event handler method looks similar to the following snippet:

@Inject
@Optional
void handleConvinceEvent(
        @UIEventTopic(MafiaBossConstants.TOPIC_CONVINCE) String target) {
    e4HandlerLabel.setText("Received boss CONVINCE command for " + target);
}

By using @UIEventTopic you ensure that the code is executed in the UI thread. If you don't care about the UI thread, you can use @EventTopic instead. The handler that is registered in the background will also be automatically unregistered when the containing instance is destroyed.
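
For comparison, a hedged sketch of the same handler using @EventTopic; it is then invoked in the event delivery thread and would need manual UI synchronization:

@Inject
@Optional
void handleConvinceEvent(
        @EventTopic(MafiaBossConstants.TOPIC_CONVINCE) String target) {
    // not synchronized with the UI thread, so do not touch widgets directly here
    System.out.println("Received boss CONVINCE command for " + target);
}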

While the method gets directly invoked as event handler, the injection does not work without modifications on the event producer side. For this the data that should be used for injection needs to be added to the event properties for the key org.eclipse.e4.data. This key is specified as a constant in IEventBroker. But using the constant would also introduce a dependency to org.eclipse.e4.core.services, which is not always intended for event producer bundles. Therefore modifying the generation of the event properties map in BossCommand will make the E4 event handling injection work:

// create the event properties object
Map<String, Object> properties = new HashMap<>();
properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
properties.put("org.eclipse.e4.data", target);

Note:
The EventBroker additionally adds the topic to the event properties for the key event.topics. In Oxygen it does not seem to be necessary anymore.

The sources for this tutorial are hosted on GitHub in the already existing projects:

The PDE version also includes a sample project org.fipro.mafia.ui which is a very simple RCP application that shows the usage of the event handler in a view part.


Access OSGi Services via web interface

In this blog post I want to share a simple approach to make OSGi services available via web interface. I will show a simple approach that includes the following:

  • Embedding a Jetty webserver in an OSGi application
  • Registering a Servlet via OSGi DS using the HTTP Whiteboard specification

I will only cover this simple scenario here and will not cover accessing OSGi services via REST interface. If you are interested in that you might want to look at the OSGi – JAX-RS Connector, which looks also very nice. Maybe I will look at this in another blog post. For now I will focus on embedding a Jetty Server and deploy some resources.

I will skip the introduction on OSGi DS and extend the examples from my Getting Started with OSGi Declarative Services blog. It is easier to follow this post if you have done the other tutorial first, but it is not required if you adapt the contents here to your environment.

As a first step create a new project org.fipro.inverter.http. In this project we will add the resources created in this tutorial. If you use PDE you should create a new Plug-in Project, with Bndtools create a new Bnd OSGi Project using the Component Development template.

PDE – Target Platform

In PDE it is best practice to create a Target Definition so the work is based on a specific set of bundles and we don’t need to install bundles in our IDE. Follow these steps to create a Target Definition for this tutorial:

  • Create a new target definition
    • Right click on project org.fipro.inverter.http → New → Other… → Plug-in Development → Target Definition
    • Set the filename to org.fipro.inverter.http.target
    • Initialize the target definition with: Nothing: Start with an empty target definition
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site http://download.eclipse.org/releases/oxygen
    • Disable Group by Category
    • Select the following entries
      • Equinox Core SDK
      • Equinox Compendium SDK
      • Jetty Http Server Feature
    • Click Finish
  • Optional: Add a new Software Site to include JUnit to the Target Definition (only needed in case you followed all previous tutorials on OSGi DS or want to integrate JUnit tests for your services)
    • Software Site http://download.eclipse.org/tools/orbit/R-builds/R20170307180635/repository
    • Select JUnit Testing Framework
    • Click Finish
  • Save your work and activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

Bndtools – Repository

Using Bndtools is different as you already know if you followed my previous blog posts. To be also able to follow this blog post by using Bndtools, I will describe the necessary steps here.

We will use Apache Felix in combination with Bndtools instead of Equinox. This way we don’t need to modify the predefined repository and can start without further actions. The needed Apache Felix bundles are already available.

PDE – Prepare project dependencies

We will prepare the project dependencies in advance so it is easier to copy and paste the code samples to the project. Within the Eclipse IDE the Quick Fixes would also support adding the dependencies afterwards of course.

  • Open the MANIFEST.MF file of the org.fipro.inverter.http project and switch to the Dependencies tab
  • Add the following dependencies on the Imported Packages side:
    • javax.servlet (3.1.0)
    • javax.servlet.http (3.1.0)
    • org.fipro.inverter (1.0.0)
    • org.osgi.service.component.annotations (1.3.0)
  • Mark org.osgi.service.component.annotations as Optional via Properties…
  • Add the upper version boundaries to the Import-Package statements.

Bndtools – Prepare project dependencies

  • Open the bnd.bnd file of the org.fipro.inverter.http project and switch to the Build tab
  • Add the following bundles to the Build Path
    • org.apache.felix.http.jetty
    • org.apache.felix.http.servlet-api
    • org.fipro.inverter.api

Create a Servlet implementation

  • Create a new package org.fipro.inverter.http
  • Create a new class InverterServlet
@Component(
    service=Servlet.class,
    property= "osgi.http.whiteboard.servlet.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class InverterServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Reference
    private StringInverter inverter;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        String input = req.getParameter("value");
        if (input == null) {
            throw new IllegalArgumentException("input can not be null");
        }
        String output = inverter.invert(input);

        resp.setContentType("text/html");
        resp.getWriter().write(
            "<html><body>Result is " + output + "</body></html>");
    }

}

Let’s look at the implementation:

  1. It is a typical Servlet implementation that extends javax.servlet.http.HttpServlet
  2. It is also an OSGi Declarative Service that is registered as service of type javax.servlet.Servlet
  3. The service has PROTOTYPE scope
  4. A special property osgi.http.whiteboard.servlet.pattern is set. This configures the URL pattern under which the servlet is available, i.e. the request mapping.
  5. It references the StringInverter OSGi service from the previous tutorial via field reference. And yes since Eclipse Oxygen this is also supported in Equinox (I wrote about this here).

PDE – Launch the example

Before explaining the details further, launch the example to see if our servlet is available via standard web browser. For this we create a launch configuration, so we can start directly from the IDE.

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Inverter Http
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.inverter.api
      • org.fipro.inverter.http
      • org.fipro.inverter.provider
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.apache.felix.gogo.shell
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.services
      • org.eclipse.osgi.util
    • Equinox Http Service and Http Whiteboard
      • org.eclipse.equinox.http.jetty
      • org.eclipse.equinox.http.servlet
    • Jetty
      • javax.servlet
      • org.eclipse.jetty.continuation
      • org.eclipse.jetty.http
      • org.eclipse.jetty.io
      • org.eclipse.jetty.security
      • org.eclipse.jetty.server
      • org.eclipse.jetty.servlet
      • org.eclipse.jetty.util
  • Ensure that Default Auto-Start is set to true
  • Switch to the Arguments tab
    • Add -Dorg.osgi.service.http.port=8080 to the VM arguments
  • Click Run

Note:
If you include the above bundles in an Eclipse RCP application, ensure that you auto-start the org.eclipse.equinox.http.jetty bundle to automatically start the Jetty server. This can be done on the Configuration tab of the Product Configuration Editor.

If you now open a browser and go to the URL http://localhost:8080/invert?value=Eclipse you should get a response with the inverted output.

Bndtools – Launch the example

  • Open the launch.bndrun file in the org.fipro.inverter.http project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.inverter.http
    • org.fipro.inverter.provider
    • org.apache.felix.http.jetty
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Add -Dorg.osgi.service.http.port=8080 to the JVM Arguments
  • Click Run OSGi

Http Service & Http Whiteboard

Now why is this simply working? We only implemented a servlet and provided it as OSGi DS. And it is “magically” available via web interface. The answer to this is the OSGi Http Service Specification and the Http Whiteboard Specification. The OSGi Compendium Specification R6 contains the Http Service Specification Version 1.2 (Chapter 102 – Page 45) and the Http Whiteboard Specification Version 1.0 (Chapter 140 – Page 1067).

The purpose of the Http Service is to provide access to services on the internet or other networks for example by using a standard web browser. This can be done by registering servlets or resources to the Http Service. Without going too much into detail, the implementation is similar to an embedded web server, which is the reason why the default implementations in Equinox and Felix are based on Jetty.

To register servlets and resources to the Http Service you need to know the Http Service API very well, and you need to retrieve the Http Service and operate on it directly. As this is not very convenient, the Http Whiteboard Specification was introduced. It allows registering servlets and resources via the Whiteboard Pattern, without the need to know the Http Service API in detail. I always think about the whiteboard pattern as a “don’t call us, we will call you” pattern. That means you don’t register servlets on the Http Service directly, you provide them as services to the service registry, and the Http Whiteboard implementation will pick them up and register them to the Http Service.
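
For contrast, a registration directly against the Http Service could look roughly like the following sketch. It is not part of the tutorial sources; SomeServlet stands for any javax.servlet.Servlet implementation and the alias /invert is only illustrative:

@Component
public class ClassicServletRegistration {

    @Reference
    HttpService httpService;

    @Activate
    void activate() throws ServletException, NamespaceException {
        // the consumer needs to know and call the Http Service API directly
        httpService.registerServlet("/invert", new SomeServlet(), null, null);
    }

    @Deactivate
    void deactivate() {
        // and it also needs to take care of unregistering again
        httpService.unregister("/invert");
    }
}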

Via Http Whiteboard it is possible to register:

  • Servlets
  • Servlet Filters
  • Resources
  • Servlet Listeners

I will show some examples so you can play around with the Http Whiteboard service.

Register Servlets

An example on how to register a servlet via Http Whiteboard is shown above. The main points are:

  • The servlet needs to be registered as OSGi service of type javax.servlet.Servlet.
  • The component property osgi.http.whiteboard.servlet.pattern needs to be set to specify the request mappings.
  • The service scope should be PROTOTYPE.

For registering servlets the following component properties are supported. (see OSGi Compendium Specification Release 6 – Table 140.4):

Component Property – Description

osgi.http.whiteboard.servlet.asyncSupported – Declares whether the servlet supports the asynchronous operation mode. Allowed values are true and false independent of case. Defaults to false.
osgi.http.whiteboard.servlet.errorPage – Register the servlet as an error page for the error code and/or exception specified; the value may be a fully qualified exception type name or a three-digit HTTP status code in the range 400-599. Special values 4xx and 5xx can be used to match value ranges. Any value not being a three-digit number is assumed to be a fully qualified exception class name.
osgi.http.whiteboard.servlet.name – The name of the servlet. This name is used as the value of the javax.servlet.ServletConfig.getServletName() method and defaults to the fully qualified class name of the service object.
osgi.http.whiteboard.servlet.pattern – Registration pattern(s) for the servlet.
servlet.init.* – Properties starting with this prefix are provided as init parameters to the javax.servlet.Servlet.init(ServletConfig) method. The servlet.init. prefix is removed from the parameter name.
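
To illustrate the servlet.init.* prefix, the following sketch (not part of the tutorial sources, the parameter name greeting is made up) passes an init parameter to a servlet and reads it via the ServletConfig:

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.pattern=/hello",
        // provided to the servlet as init parameter "greeting"
        "servlet.init.greeting=Hello OSGi"
    },
    scope=ServiceScope.PROTOTYPE)
public class GreetingServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        // the servlet.init. prefix has already been removed
        // by the Http Whiteboard implementation
        String greeting = getServletConfig().getInitParameter("greeting");
        resp.setContentType("text/plain");
        resp.getWriter().write(greeting);
    }
}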

The Http Whiteboard service needs to call javax.servlet.Servlet.init(ServletConfig) to initialize the servlet before it starts to serve requests, and when it is not needed anymore javax.servlet.Servlet.destroy() to shut down the servlet. If more than one Http Whiteboard implementation is available in a runtime, the init() and destroy() calls would be executed multiple times, which violates the Servlet specification. It is therefore recommended to use the PROTOTYPE scope for servlets to ensure that every Http Whiteboard implementation gets its own service instance.

Note:
In a controlled runtime, like an RCP application that is delivered with one Http Whiteboard implementation and that does not support installing bundles at runtime, the usage of the PROTOTYPE scope is not required. Actually such a runtime ensures that the servlet is only instantiated and initialized once. But if possible it is recommended that the PROTOTYPE scope is used.

To register a servlet as an error page, the service property osgi.http.whiteboard.servlet.errorPage needs to be set. The value can be either a three-digit HTTP error code, the special codes 4xx or 5xx to specify a range of error codes, or a fully qualified exception class name. The service property osgi.http.whiteboard.servlet.pattern is not required for servlets that provide error pages.

The following snippet shows an error page servlet that deals with IllegalArgumentExceptions and the HTTP error code 500. It can be tested by calling the inverter servlet without a query parameter.

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.errorPage=java.lang.IllegalArgumentException",
        "osgi.http.whiteboard.servlet.errorPage=500"
    },
    scope=ServiceScope.PROTOTYPE)
public class ErrorServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write(
        "<html><body>You need to provide an input!</body></html>");
    }
}

Register Filters

Via servlet filters it is possible to intercept servlet invocations. They are used to modify the ServletRequest and ServletResponse to perform common tasks before and after the servlet invocation.

The example below shows a servlet filter that adds a simple header and footer on each request to the servlet with the /invert pattern:

@Component(
    property = "osgi.http.whiteboard.filter.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class SimpleServletFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig)
            throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        response.setContentType("text/html");
        response.getWriter().write("<b>Inverter Servlet</b><p>");
        chain.doFilter(request, response);
        response.getWriter().write("</p><i>Powered by fipro</i>");
    }

    @Override
    public void destroy() { }

}

To register a servlet filter the following criteria must match:

  • It needs to be registered as OSGi service of type javax.servlet.Filter.
  • One of the given component properties needs to be set:
    • osgi.http.whiteboard.filter.pattern
    • osgi.http.whiteboard.filter.regex
    • osgi.http.whiteboard.filter.servlet
  • The service scope should be PROTOTYPE.

For registering servlet filters the following service properties are supported. (see OSGi Compendium Specification Release 6 – Table 140.5):

Service Property – Description

osgi.http.whiteboard.filter.asyncSupported – Declares whether the servlet filter supports asynchronous operation mode. Allowed values are true and false independent of case. Defaults to false.
osgi.http.whiteboard.filter.dispatcher – Select the dispatcher configuration when the servlet filter should be called. Allowed string values are REQUEST, ASYNC, ERROR, INCLUDE, and FORWARD. The default for a filter is REQUEST.
osgi.http.whiteboard.filter.name – The name of a servlet filter. This name is used as the value of the FilterConfig.getFilterName() method and defaults to the fully qualified class name of the service object.
osgi.http.whiteboard.filter.pattern – Apply this servlet filter to the specified URL path patterns. The format of the patterns is specified in the servlet specification.
osgi.http.whiteboard.filter.regex – Apply this servlet filter to the specified URL paths. The paths are specified as regular expressions following the syntax defined in the java.util.regex.Pattern class.
osgi.http.whiteboard.filter.servlet – Apply this servlet filter to the referenced servlet(s) by name.
filter.init.* – Properties starting with this prefix are passed as init parameters to the Filter.init() method. The filter.init. prefix is removed from the parameter name.
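
As an illustration of the osgi.http.whiteboard.filter.servlet and osgi.http.whiteboard.filter.dispatcher properties, the following sketch (not part of the tutorial sources) registers a filter for the inverter servlet by its default name, which is the fully qualified class name of the servlet component:

@Component(
    property = {
        "osgi.http.whiteboard.filter.servlet=org.fipro.inverter.http.InverterServlet",
        // also run the filter when the servlet is reached via the error dispatcher
        "osgi.http.whiteboard.filter.dispatcher=REQUEST",
        "osgi.http.whiteboard.filter.dispatcher=ERROR"
    },
    scope=ServiceScope.PROTOTYPE)
public class LoggingServletFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig)
            throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // simply log the call and continue with the filter chain
        System.out.println("Request for the inverter servlet received");
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }

}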

Register Resources

It is also possible to register a service that informs the Http Whiteboard service about static resources like HTML files, images, CSS or JavaScript files. For this a simple service can be registered that only needs to have the following two mandatory service properties set:

Service Property – Description

osgi.http.whiteboard.resource.pattern – The pattern(s) to be used to serve resources. As defined by the Java Servlet 3.1 Specification in section 12.2, Specification of Mappings. This property marks the service as a resource service.
osgi.http.whiteboard.resource.prefix – The prefix used to map a requested resource to the bundle’s entries. If the request’s path info is not null, it is appended to this prefix. The resulting string is passed to the getResource(String) method of the associated Servlet Context Helper.

The service does not need to implement any specific interface or function. All required information is provided via the component properties.

To create a resource service follow these steps:

  • Create a folder resources in the project org.fipro.inverter.http
  • Add an image in that folder, e.g. eclipse_logo.png
  • PDE – Add the resources folder in the build.properties
  • Bndtools – Add the following line to the bnd.bnd file on the Source tab
    -includeresource: resources=resources
  • Create resource service
@Component(
    service = ResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/files/*",
        "osgi.http.whiteboard.resource.prefix=/resources"})
public class ResourceService { }

After starting the application the static resources located in the resources folder are available via the /files path in the URL, e.g. http://localhost:8080/files/eclipse_logo.png

Note:
While writing this blog post I came across a very nasty issue. Because I initially registered the servlet filter for the /* pattern, the simple header and footer were always added. This also set the content type, which of course didn’t match the content type of the image, and so the static content was never shown correctly. So if you want to use servlet filters to add common headers and footers, you need to take care of the pattern so the servlet filter is not applied to static resources.

Register Servlet Listeners

It is also possible to register different servlet listeners as whiteboard services. The following listeners are supported according to the servlet specification:

  • ServletContextListener – Receive notifications when Servlet Contexts are initialized and destroyed.
  • ServletContextAttributeListener – Receive notifications for Servlet Context attribute changes.
  • ServletRequestListener – Receive notifications for servlet requests coming in and being destroyed.
  • ServletRequestAttributeListener – Receive notifications when servlet Request attributes change.
  • HttpSessionListener – Receive notifications when Http Sessions are created or destroyed.
  • HttpSessionAttributeListener – Receive notifications when Http Session attributes change.
  • HttpSessionIdListener – Receive notifications when Http Session ID changes.

Only one component property needs to be set so that the Http Whiteboard implementation handles the listener.

Service Property – Description

osgi.http.whiteboard.listener – When set to true this listener service is handled by the Http Whiteboard implementation. When not set or set to false the service is ignored. Any other value is invalid.

The following example shows a simple ServletRequestListener that prints out the client address on the console for each request (borrowed from the OSGi Compendium Specification):

@Component(property = "osgi.http.whiteboard.listener=true")
public class SimpleServletRequestListener
    implements ServletRequestListener {

    public void requestInitialized(ServletRequestEvent sre) {
        System.out.println("Request initialized for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

    public void requestDestroyed(ServletRequestEvent sre) {
        System.out.println("Request destroyed for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

}

Servlet Context and Common Whiteboard Properties

The ServletContext is specified in the servlet specification and provided to the servlets at runtime by the container. By default there is one ServletContext and without additional information the servlets are registered to that default ServletContext via the Http Whiteboard implementation. This could lead to scenarios where different bundles provide servlets for the same request mapping. In that case the service.ranking will be inspected to decide which servlet should be delivered. If the servlets belong to different applications, it is possible to specify different contexts. This can be done by registering a custom ServletContextHelper as whiteboard service and associate the servlets to the corresponding context. The ServletContextHelper can be used to customize the behavior of the ServletContext (e.g. handle security, provide resources, …) and to support multiple web-applications via different context paths.

A custom ServletContextHelper needs to be registered as a service of type ServletContextHelper and needs to have the following two service properties set:

  • osgi.http.whiteboard.context.name
  • osgi.http.whiteboard.context.path
Service Property – Description

osgi.http.whiteboard.context.name – Name of the Servlet Context Helper. This name can be referred to by Whiteboard services via the osgi.http.whiteboard.context.select property. The syntax of the name is the same as the syntax for a Bundle Symbolic Name. The default Servlet Context Helper is named default. To override the default, register a custom ServletContextHelper service with the name default. If multiple Servlet Context Helper services are registered with the same name, the one with the highest Service Ranking is used. In case of a tie, the service with the lowest service ID wins. In other words, the normal OSGi service ranking applies.
osgi.http.whiteboard.context.path – Additional prefix to the context path for servlets. This property is mandatory. Valid characters are specified in IETF RFC 3986, section 3.3. The context path of the default Servlet Context Helper is /. A custom default Servlet Context Helper may use an alternative path.
context.init.* – Properties starting with this prefix are provided as init parameters through the ServletContext.getInitParameter() and ServletContext.getInitParameterNames() methods. The context.init. prefix is removed from the parameter name.

The following example will register a ServletContextHelper for the context path /eclipse and will retrieve resources from http://www.eclipse.org. It is registered with BUNDLE service scope to ensure that every bundle gets its own instance, which is for example important to resolve resources from the correct bundle.

Note:
Create it in a new package org.fipro.inverter.http.eclipse within the org.fipro.inverter.http project, as we will need to create some additional resources to show how this example actually works.

@Component(
    service = ServletContextHelper.class,
    scope = ServiceScope.BUNDLE,
    property = {
        "osgi.http.whiteboard.context.name=eclipse",
        "osgi.http.whiteboard.context.path=/eclipse" })
public class EclipseServletContextHelper extends ServletContextHelper {

    public URL getResource(String name) {
        // remove the path from the name
        name = name.replace("/eclipse", "");
        try {
            return new URL("http://www.eclipse.org/" + name);
        } catch (MalformedURLException e) {
            return null;
        }
    }
}

Note:
With PDE remember to add org.osgi.service.http.context to the Imported Packages. With Bndtools remember to add the new package to the Private Packages in the bnd.bnd file on the Contents tab.

To associate servlets, servlet filters, resources and listeners to a ServletContextHelper, they share common service properties (see OSGi Compendium Specification Release 6 – Table 140.3) in addition to the service specific properties:

Service Property – Description

osgi.http.whiteboard.context.select – An LDAP-style filter to select the associated ServletContextHelper service to use. Any service property of the Servlet Context Helper can be filtered on. If this property is missing the default Servlet Context Helper is used. For example, to select a Servlet Context Helper with name myCTX provide the following value: (osgi.http.whiteboard.context.name=myCTX). To select all Servlet Context Helpers provide the following value: (osgi.http.whiteboard.context.name=*)
osgi.http.whiteboard.target – The value of this service property is an LDAP style filter expression to select the Http Whiteboard implementation(s) to handle this Whiteboard service. The LDAP filter is used to match HttpServiceRuntime services. Each Http Whiteboard implementation exposes exactly one HttpServiceRuntime service. This property is used to associate the Whiteboard service with the Http Whiteboard implementation that registered the HttpServiceRuntime service. If this property is not specified, all Http Whiteboard implementations can handle the service.

The following example will register a servlet only for the introduced /eclipse context:

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.pattern=/image",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"
    },
    scope=ServiceScope.PROTOTYPE)
public class ImageServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write("Show an image from www.eclipse.org");
        resp.getWriter().write(
            "<p><img src='img/nattable/images/FeatureScreenShot.png'/></p>");
    }

}

And to make this work in combination with the introduced ServletContextHelper we additionally need to register the resources for the /img/* pattern, which are also only assigned to the /eclipse context:

@Component(
    service = EclipseImageResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/img/*",
        "osgi.http.whiteboard.resource.prefix=/eclipse",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"})
public class EclipseImageResourceService { }

If you start the application and browse to http://localhost:8080/eclipse/image you will see an output from the servlet together with an image that is loaded from http://www.eclipse.org.

Note:
The component properties and predefined values are available via org.osgi.service.http.whiteboard.HttpWhiteboardConstants. So you don’t need to remember them all and can also retrieve some additional information about the properties via the corresponding Javadoc.
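
For example, the servlet registration from the beginning could use the constants instead of plain strings; a sketch (the servlet body stays the same as shown above):

@Component(
    service=Servlet.class,
    // the concatenation is a compile-time constant expression and
    // therefore allowed inside the annotation
    property= HttpWhiteboardConstants.HTTP_WHITEBOARD_SERVLET_PATTERN + "=/invert",
    scope=ServiceScope.PROTOTYPE)
public class InverterServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    // doGet() unchanged compared to the implementation shown at the beginning
}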

The sources for this tutorial are hosted on GitHub in the already existing projects:

 


OSGi Declarative Services news in Eclipse Oxygen

With this blog post I want to share my excitement about the OSGi DS related news that are coming with Eclipse Oxygen, and inform you about the new features and the changes you will face with them. With Oxygen M6 you can already have a look at those features and also provide feedback if you find any issues.

Note:
You don’t have to be a committer or contribute code to be part of an Open Source Community. Also testing new features and providing feedback is a very welcome contribution to a project. So feel free to participate in making the Eclipse Oxygen release even better than the previous releases!

DS 1.3 with Felix SCR

The Equinox team decided to drop Equinox DS (stuck with DS 1.2) and replace it with Felix SCR (Bug 501950). This brings DS 1.3 to Eclipse which was the last missing piece in the OSGi R6 compendium in Equinox.

It was already possible to exchange Equinox DS with Felix SCR with Neon, but now you don’t need to replace it yourself, it is directly part of Equinox. There are some important things to notice though, which I will list here:

Felix SCR bundle from Orbit

The Felix SCR bundle included in Equinox/Eclipse is not equal to the Felix SCR bundle from Apache. The Apache bundle imports and exports the org.osgi packages it requires, e.g. the component related interfaces like ComponentContext, ComponentFactory or ComponentServiceObjects. It also contains the Promises API used by Felix SCR, which was not available in Equinox before. The Felix SCR bundle in Orbit does not contain these packages. They are provided by other Equinox bundles, which are now required to use DS with Equinox.

Note:
If you are interested in some more information about the reasons for the changes to the Orbit Felix SCR bundle, have a look at Bug 496559 where Thomas Watson explained the reasons very nicely.

The bundles needed for DS in Equinox are now as follows:

  • org.apache.felix.scr
    The Declarative Services implementation.
  • org.eclipse.osgi.services
    Contains the required OSGi service interfaces.
  • org.eclipse.osgi.util
    Contains the Promises API and implementation required by Felix SCR.
  • org.eclipse.equinox.ds (optional)
    Wrapper bundle to start Felix SCR and provide backwards compatibility.

Adding the Promises API (see OSGi R6 Compendium Specification chapter 705) to Equinox is also very nice, but worth its own blog post, so I will not go into more details here. The more interesting thing is that org.eclipse.equinox.ds is still available and in some scenarios required. It does not contain a DS implementation anymore. It is used as a wrapper bundle to start Felix SCR and provide backwards compatibility. The main reasons are:

  1. Auto-starting DS
    The Equinox startup policy is to start bundles only if a class is accessed from them, or if they are configured for auto-starting. As the SCR needs to be started automatically but actually no one really accesses a class from it, every Eclipse application that makes use of Declarative Services configured the auto-start of org.eclipse.equinox.ds in the Product Configuration. If that bundle were simply replaced, every Eclipse based product would need to modify its Product Configuration.
  2. Behavioral Compatibility
    Equinox DS and Felix SCR behave differently in some cases. For example Felix SCR deactivates and destroys a component once the last consumer that references the component instance is done with it. Equinox DS on the other hand keeps the instance (I explained that in my Control OSGi DS Component Instances blog post). As p2 and probably also other implementations rely on the Equinox behavior that components are not deactivated and destroyed immediately, the property

    ds.delayed.keepInstances=true

    is set automatically by org.eclipse.equinox.ds.

Considering these changes it is also possible to remove org.eclipse.equinox.ds from an Eclipse Product Configuration and solely rely on org.apache.felix.scr. You just need to ensure that org.apache.felix.scr is automatically started and that ds.delayed.keepInstances is set to true (which is e.g. required when using p2, as described in Bug 510673).
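
A minimal sketch of how that could look in a launch configuration, assuming the property is picked up as a framework/system property (please verify this against the Felix SCR documentation of the version you are using):

-Dds.delayed.keepInstances=true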

DS Console Commands

If you want to inspect services via console, you need to know the new commands, as the old commands are not available anymore:

Equinox DS – Felix SCR – Description

list/ls [bundle-id] – scr:list [bundle-id] – List all components.
component|comp <comp-id> – scr:info <comp-id> – Print all component information.
enable|en <comp-id> – scr:enable <comp-name> – Enable a component.
disable|dis <comp-id> – scr:disable <comp-name> – Disable a component.
enableAll|enAll [bundle-id] – (no equivalent) – Enable all components.
disableAll|disAll [bundle-id] – (no equivalent) – Disable all components.

Apart from the different command names and the fact that the short versions are not supported, you should notice the following:

  • The scope (scr:) is usually not needed in Equinox, because by default there are no other commands with the same names. So only the command names after the colon can be used.
  • There are no equivalent commands to enable or disable all components at once.
  • To enable or disable a component you need to specify the name of the component, not the id that is shown by calling list before (see the sketch below).
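
For illustration, a console session with Felix SCR might look like the following sketch; the component id 1 and the component name org.fipro.example.SomeComponent are made up and of course runtime specific:

g! list
g! info 1
g! disable org.fipro.example.SomeComponent
g! enable org.fipro.example.SomeComponent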

DS 1.3 Annotations Support in PDE

With Eclipse Neon the DS Annotations Support was added to PDE. Now Peter Nehrer (Twitter: @pnehrer) has contributed the support for DS 1.3 annotations. In the Preferences you will notice that you can specify which DS specification version you want to use. By default it is set to 1.3. The main idea is that it is possible to configure that only DS 1.2 annotations should be used in case you still need to develop on that specification level (e.g. for applications that run on Eclipse Neon).

The Preferences page also has another new setting “Add DS Annotations to classpath”, which is enabled by default. That setting will automatically add the necessary library to the classpath. While this is nice if you only implement a plain OSGi application, it will cause issues for Eclipse RCP applications that are built using Tycho. The JAR that is added to the classpath is located in the IDE, so the headless Tycho build is not aware of it! For Eclipse RCP development I therefore suggest disabling that setting and adding org.osgi.service.component.annotations as an optional dependency to the Import-Package header as described in my Getting Started tutorial. At least if the bundles should be built with Tycho.

As a quick overview, with DS 1.3 the following modifications to the annotations are available:

  • Life cycle methods accept Component Property Types as parameter
  • Introduction of the Field Strategy which means @Reference can be used for field injection
  • Event methods can get the ComponentServiceObjects parameter type for PROTOTYPE scoped references, and there are multiple parameter type options for these methods
  • @Component#configurationPid
    multiple configuration PID values can be set and the value “$” can be used as placeholder for the name of the component
  • @Component#servicefactory
    deprecated and replaced by scope
  • @Component#reference
    specify Lookup Strategy references
  • @Component#scope
    specify the service scope of the component
  • @Reference#bind
    specify the name of the bind event method of a reference
  • @Reference#field
    name of the field, typically not specified manually
  • @Reference#fieldOption
    specify how field values should be managed
  • @Reference#scope
    specify the reference scope

Note:
For further information have a look at my previous blog posts where I explained these options in comparison to DS 1.2.
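
As a small illustration (not a complete walkthrough), a component using some of the listed DS 1.3 features might look like the following sketch; MessageService, MessageProvider and MessageConfig are made-up names:

@Component(service=MessageService.class, scope=ServiceScope.PROTOTYPE)
public class MessageService {

    @interface MessageConfig {
        String prefix() default ">> ";
    }

    // DS 1.3 Field Strategy: the service reference is injected directly into the field
    @Reference
    private MessageProvider provider;

    private String prefix;

    // DS 1.3 life cycle method with a Component Property Type parameter
    @Activate
    void activate(MessageConfig config) {
        this.prefix = config.prefix();
    }

    public String getMessage() {
        return prefix + provider.getMessage();
    }
}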

Although already in very good shape, the DS 1.3 annotations support is not yet 100% finished. I already uncovered the following missing pieces:

  • Missing Require-Capability header in MANIFEST.MF (Bug 513216)
  • Missing Provide-Capability header in MANIFEST.MF (Bug 490063)
  • False error when using bind/updated/unbind parameter on field references (Bug 513462)

IMHO it would be also nice if the necessary p2.inf files are automatically created/updated to support p2 Capability Advice configurations, which is necessary because p2 still does not support OSGi capabilities.

As stated at the beginning, you could help by testing this implementation and giving feedback. It would be very helpful to have more people testing it, so that the implementation is stable for the Oxygen release.

Thanks to Peter for adding that long-awaited feature to PDE!

@Service Annotation for Eclipse RCP

Also for RCP development there are some news with regards to OSGi services. The @Service annotation, created by Tom Schindl for the e(fx)clipse project, has been ported to the Eclipse Platform (introduced here).

When using the default Eclipse 4 injection mechanisms, the injection of OSGi services is limited to a unary cardinality. Given an OSGi service of type StringInverter (see my previous tutorials) the injection can be done like this:

public class SamplePart {

    @Inject
    StringInverter inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

With the OPTIONAL cardinality, @Optional is added to the injection:

public class SamplePart {

    @Inject
    @Optional
    StringInverter inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

This means:

  • Only a single service instance can get injected.
  • If the cardinality is MANDATORY (no @Optional), a service instance needs to be available, otherwise the injection fails with an exception.
  • If the cardinality is OPTIONAL (@Inject AND @Optional) and no service is available at creation time, a new service will get injected when it becomes available.

This behavior is similar to the DYNAMIC GREEDY policy for OSGi DS service references. But the default injection mechanism for OSGi services has several issues that are reported in Bug 413287.

  • If a service is injected and a new service becomes available, the new service will be injected, regardless of its service ranking. So even if the new service has a lower ranking it will be injected. Compared with the OSGi service specification this is incorrect, as the service with the highest ranking should be used, or, if the ranking is equal, the service that was registered first.
  • If a service is injected and it becomes unavailable, there is no injection of a service with a lower service ranking. Instead null will be injected, even if a valid service is still available.
  • If a service implements multiple service interfaces, only the first service key is reset.
  • If a service instance should be created per bundle or per requestor by using either a service factory or scope, there will be only one instance for every request, because the service is always requested via BundleContext of one of the platform bundles.

Note:
I was able to provide a fix for the first three points. The last issue in the list regarding scoped services can not be solved for the default injection mechanism.

The @Service annotation was introduced to solve all these issues and additionally support the multiple cardinality (only MULTIPLE, not AT_LEAST_ONE).

To use it simply add @Service additionally to @Inject:

public class SamplePart {

    @Inject
    @Service
    StringInverter inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

The above snippet is similar to the Field Strategy in OSGi DS. To get something similar to the Event Strategy you would use method injection like in the following snippet:

public class SamplePart {

    StringInverter inverter;

    @Inject
    public void setInverter(@Service StringInverter inverter) {
        this.inverter = inverter;
    }
    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

With using the @Service annotation on a unary reference, you get a behavior similar to the DYNAMIC GREEDY policy for OSGi DS service references, which is actually the same as with the default injection mechanism after my fix is applied. Additionally the usage of a service factory or scoped services is supported by using the @Service annotation, as the BundleContext of the requestor is used to retrieve the service.

Note:
While writing this blog post there is an issue with the OPTIONAL cardinality in case no service is available at creation time. If a new service becomes available, it is not injected automatically. I created Bug 513563 for this and provided a fix for both, the Eclipse Platform and e(fx)clipse.

One interesting feature of the @Service annotation is the support of the MULTIPLE cardinality. This way it is possible to get all OSGi services of a specific type injected, in the same order as in the OSGi service registry. For this simply use the injection on a list of the desired service type.

public class SamplePart {

    @Inject
    @Service
    List<StringInverter> inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

Another nice feature (and also pretty new for e(fx)clipse) is the filter support. Tom introduced this here. e(fx)clipse supports static as well as dynamic filters that can change at runtime. Because of dependency issues only the support for static filters was ported to the Eclipse Platform. Via filterExpression type element it is possible to specify an LDAP filter to constrain the set of services that should be injected. This is similar to the target type element of OSGi DS service references.

public class SamplePart {

    // only get services injected that have specified the
    // value "online" for the component property "connection"
    @Inject
    @Service(filterExpression="(connection=online)")
    List<StringInverter> inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

With the @Service annotation the Eclipse injection for OSGi services aligns better with OSGi DS. And with the introduction of DS 1.3 to Equinox the usage of OSGi services for Eclipse RCP applications should become even more a common pattern than it was before with using the Equinox only Extension Points.

For me the news on OSGi DS in the Eclipse Platform are the most interesting ones in the Oxygen release. But of course not the only ones. So I encourage everyone to try out the newest Oxygen milestone releases to get the best out of it for everyone!


Control OSGi DS Component Instances via Configuration Admin

While trying to clean up the OSGi services in the Eclipse Platform Runtime I came across the fact that singleton service instances are not always feasible. For example, localization that is done on application level does not work in the context of RAP, where every user can have a different localization.

In my last blog post I showed how to manage service instances with Declarative Services. In that scope I mainly showed the following scenarios:

  • one service instance per runtime
  • one service instance per bundle
  • one service instance per component/requestor
  • one service instance per request

For cases like the RAP scenario, these four categories don’t match very well. We actually need something additional, like one service instance per session. But a session is nothing natural in the OSGi world. At least not as natural as it is in the context of a web application.

First I tried to find a solution using PROTOTYPE scoped services introduced with DS 1.3. But IMHO that approach doesn’t fit very well, as by default the services have bundle scope, unless the consumer specifies that a new instance is needed. Also the approach of creating service instances on demand by using a Factory Component or the DS 1.3 ComponentServiceObjects interface does not seem to be a good option in this case. The consumer is in charge of creating and destroying the instances, and needs to be aware of that fact.

A session is mainly used to associate a set of states to someone (e.g. a user) over time. The localization setting of a user is a configuration value. And configurations for OSGi services are managed by the Configuration Admin. Having these things in mind and searching the web and digging through the OSGi Compendium Specification, I came across the Managed Service Factory and this blog post by Neil Bartlett (already quite some years old).

To summarize the information in short, the idea is to create a new service instance per Component Configuration. So for every session a new Component Configuration needs to be created, which leads to the creation of a new Component Instance. Typically some unique identifier like the session ID needs to be added to the component properties, so it is possible to use filters based on that.
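
To sketch that idea (SessionService and the component property session.id are made-up names for illustration), a consumer could then pick the instance for a specific session via an LDAP filter on the BundleContext:

// a sketch: select the service instance that was created for a specific session
// via an LDAP filter on the made-up component property "session.id"
SessionService getSessionInstance(BundleContext context, String sessionId)
        throws InvalidSyntaxException {
    Collection<ServiceReference<SessionService>> refs =
        context.getServiceReferences(SessionService.class, "(session.id=" + sessionId + ")");
    return refs.isEmpty() ? null : context.getService(refs.iterator().next());
}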

The Managed Service Factory description in the specification is hard to understand (at least for me), the tutorials that exist mainly focus on the usage without Declarative Services by implementing the corresponding interfaces, and the blog post by Neil unfortunately only covers half of the topic. Therefore I will try to explain how to create service instances for different configurations with a small example that is based on the previous tutorial.

The sources for this blog post can be found in my DS projects on GitHub:

Note:
I will try to bring in some Configuration Admin details at the corresponding places, but for more information in advance please have a look at my Configuring OSGi Declarative Services blog post.

Service Implementation

Let’s start by creating the service implementation. Implement the OneShot service interface and put it in the org.fipro.oneshot.provider bundle from the previous blog post.

@Component(
    configurationPid="org.fipro.oneshot.Borg",
    configurationPolicy=ConfigurationPolicy.REQUIRE)
public class Borg implements OneShot {

    @interface BorgConfig {
        String name() default "";
    }

    private static AtomicInteger instanceCounter =
            new AtomicInteger();

    private final int instanceNo;
    private String name;

    public Borg() {
        instanceNo = instanceCounter.incrementAndGet();
    }

    @Activate
    void activate(BorgConfig config) {
        this.name = config.name();
    }

    @Modified
    void modified(BorgConfig config) {
        this.name = config.name();
    }

    @Override
    public void shoot(String target) {
        System.out.println("Borg " + name
            + " #" + instanceNo + " of "+ instanceCounter.get()
            + " took orders and executed " + target);
    }

}

You should notice the following with that implementation:

  • We specify a configuration PID so it is not necessary to use the fully qualified class name later.
    Remember: the configuration PID defaults to the configured name, which defaults to the fully qualified class name of the component class.
  • We set the configuration policy REQUIRE, so the component will only be satisfied and therefore activated once a matching configuration object is set by the Configuration Admin.
  • We create the Component Property Type BorgConfig for type safe access to the Configuration Properties (DS 1.3).
  • We add life cycle methods for activate to initially consume and modified to be able to change the configuration at runtime.

Configuration Creation

The next thing is to create a configuration. For this we need to have a look at the ConfigurationAdmin API. In my Configuring OSGi Declarative Services blog post I only talked about ConfigurationAdmin#getConfiguration(String, String). This is used to get or create the configuration of a singleton service. For the configuration policy REQUIRE this means that a single Managed Service is created once the Configuration object is used by a requesting bundle. In such a case the Configuration Properties will contain the property service.pid with the value of the configuration PID.

To create and handle multiple service instances via Component Configuration, a different API needs to be used. For creating new Configuration objects there is ConfigurationAdmin#createFactoryConfiguration(String, String). This way a Managed Service Factory will be registered by the requesting bundle, which allows to create multiple Component Instances with different configurations. In this case the Configuration Properties will contain the property service.factoryPid with the value of the configuration PID and the service.pid with a unique value.

As it is not possible to mix Managed Services and Managed Service Factories with the same PID, another method needs to be used to access existing configurations. For this ConfigurationAdmin#listConfigurations(String) can be used. The parameter can be a filter and the result will be an array of Configuration objects that match the filter. The filter needs to be an LDAP filter that can test any Configuration Properties, including service.pid and service.factoryPid. The following snippet for example will only return existing Configuration objects for the Borg service when it was created via Managed Service Factory.

this.configAdmin.listConfigurations(
    "(service.factoryPid=org.fipro.oneshot.Borg)")

The parameters of ConfigurationAdmin#getConfiguration(String, String) and ConfigurationAdmin#createFactoryConfiguration(String, String) are actually the same. The first parameter is the PID that needs to match the configuration PID of the component, the second is the location binding. It is best practice to use “?” as value for the location parameter.

Create the following console command in the org.fipro.oneshot.command bundle:

@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=assimilate"},
    service=AssimilateCommand.class
)
public class AssimilateCommand {

    @Reference
    ConfigurationAdmin configAdmin;

    public void assimilate(String soldier) {
        assimilate(soldier, null);
    }

    public void assimilate(String soldier, String newName) {
        try {
            // filter to find the Borg created by the
            // Managed Service Factory with the given name
            String filter = "(&(name=" + soldier + ")"
                + "(service.factoryPid=org.fipro.oneshot.Borg))";
            Configuration[] configurations =
                this.configAdmin.listConfigurations(filter);

            if (configurations == null
                    || configurations.length == 0) {
                //create a new configuration
                Configuration config =
                    this.configAdmin.createFactoryConfiguration(
                        "org.fipro.oneshot.Borg", "?");
                Hashtable<String, Object> map = new Hashtable<>();
                if (newName == null) {
                    map.put("name", soldier);
                    System.out.println("Assimilated " + soldier);
                } else {
                    map.put("name", newName);
                    System.out.println("Assimilated " + soldier
                        + " and named it " + newName);
                }
                config.update(map);
            } else if (newName != null) {
                // update the existing configuration
                Configuration config = configurations[0];
                // it is guaranteed by listConfigurations() that
                // only Configuration objects are returned with
                // non-null properties
                Dictionary<String, Object> map =
                    config.getProperties();
                map.put("name", newName);
                config.update(map);
                System.out.println(soldier
                    + " already assimilated and renamed to "
                    + newName);
            }
        } catch (IOException | InvalidSyntaxException e1) {
            e1.printStackTrace();
        }
    }
}

In the above snippet name is used as the unique identifier for a created Component Instance. So the first thing is to check if there is already a Configuration object in the database for that name. This is done by using ConfigurationAdmin#listConfigurations(String) with an LDAP filter for the name and the Managed Service Factory with service.factoryPid=org.fipro.oneshot.Borg which is the value of the configuration PID we used for the Borg service component. If there is no configuration available for a Borg with the given name, a new Configuration object will be created, otherwise the existing one is updated.

Note:
To verify the Configuration Properties you could extend the activate method of the Borg implementation to show them on the console like in the following snippet:

@Activate
void activate(BorgConfig config, Map<String, Object> properties) {
    this.name = config.name();
    properties.forEach((k, v) -> {
        System.out.println(k+"="+v);
    });
}

Once a service instance is activated it should output all Configuration Properties, including the service.pid and service.factoryPid for the instance.

Note:
Some more information on that can be found in the enRoute documentation and of course in the specification.

Service Consumer

Finally we add the following execute command in the org.fipro.oneshot.command bundle to verify the instance creation:

@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=execute"},
    service=ExecuteCommand.class
)
public class ExecuteCommand {

    @Reference(target="(service.factoryPid=org.fipro.oneshot.Borg)")
    private volatile List<OneShot> borgs;

    public void execute(String target) {
        for (ListIterator<OneShot> it =
            borgs.listIterator(borgs.size());
                it.hasPrevious(); ) {
                it.previous().shoot(target);
        }
    }
}

For simplicity we have a dynamic reference to all available OneShot service instances that have the service.factoryPid=org.fipro.oneshot.Borg. As a short reminder on the DS 1.3 field strategy: if the type is a Collection the cardinality is 0..n, and marking it volatile specifies it to be a dynamic reluctant reference.

Starting the application and executing some assimilate and execute commands will show something similar to the following on the console:

g! assimilate Lars
Assimilated Lars
g! assimilate Simon
Assimilated Simon
g! execute Dirk
Borg Lars #1 of 2 took orders and executed Dirk
Borg Simon #2 of 2 took orders and executed Dirk
g! assimilate Lars Locutus
Lars already assimilated and renamed to Locutus
g! execute Dirk
Borg Locutus #1 of 2 took orders and executed Dirk
Borg Simon #2 of 2 took orders and executed Dirk

The first two assimilate calls create new Borg service instances. This is verified by the execute command. The following assimilate call renames an existing Borg, so no new service instance is created.

Now that I have learned about Managed Service Factories and how to use them with DS, I hope I am able to adapt that in the Eclipse Platform. So stay tuned for further DS news!


Control OSGi DS Component Instances

I recently came across some use cases where a more fine grained control is needed for component instance creation. I spent some time investigating how this is done with OSGi Declarative Services in detail. It turned out to be easier than it seems; it mainly looks complicated because of missing or misleading tutorials. Therefore I decided to write a new blog post about that topic as part of my OSGi Declarative Service blog post series.

To start with, you need to know that by default there is only one component configuration created and activated in the OSGi runtime at the same time. This means that every bundle is sharing the same component instance. So you have a singleton instance for every service. Note: singleton instance in terms of “one single instance”, not “Singleton Pattern”!

If you think about multi-threading or context dependent services, you may need multiple instances of a service. In an OSGi environment there are basically the following categories:

  • one service instance per runtime
  • one service instance per bundle
  • one service instance per component/requestor
  • one service instance per request

Instance creation control can only be done for service components. So make sure to specify the service annotation type element in @Component if the implementation does not implement an interface, as shown in the sketch below.
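
For an implementation that does not implement an interface this could look like the following sketch; MyProcessor is a made-up name:

// without the service type element this class would not be registered as a
// service component and therefore no instance creation control could be applied
@Component(service=MyProcessor.class)
public class MyProcessor {

    public String process(String input) {
        return input.trim();
    }
}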

To control the instance creation you can use the following mechanisms:

  • DS 1.2 – servicefactory annotation type element of @Component
  • DS 1.3 – scope annotation type element of @Component to specify the service scope
  • DS 1.2 / DS 1.3 – Create a factory component by using the factory annotation type element of @Component

Preparation

For some hands on this topic, we first create some bundles to play with.

Note:
I don’t want to explain every step for creating services in detail in this blog post. If you don’t know how to perform the necessary steps, please refer to my Getting Started with OSGi Declarative Services blog post.

  1. Create an API bundle org.fipro.oneshot.api with a service interface OneShot
    public interface OneShot {
    
        void shoot(String target);
    
    }
  2. Create a provider bundle org.fipro.oneshot.provider with a service implementation Hitman
    @Component
    public class Hitman implements OneShot {
    
        private static AtomicInteger instanceCounter =
                new AtomicInteger(); 
    
        private final int instanceNo;
    
        public Hitman() {
            instanceNo = instanceCounter.incrementAndGet();
        }
    
        @Override
        public void shoot(String target) {
            System.out.println("BAM! I am hitman #"
                + instanceNo + ". And I killed " + target);
        }
    
    }

    This implementation will count the number of instances in a static field and remembers it in a member variable, so we can identify the created instance when the service is called.

  3. Create a command bundle org.fipro.oneshot.command with a console command to call the service
    @Component(
        property= {
            "osgi.command.scope=fipro",
            "osgi.command.function=kill"},
        service=KillCommand.class
    )
    public class KillCommand {
    
        private OneShot killer;
    
        @Reference
        void setOneShot(OneShot oneShot) {
            this.killer = oneShot;
        }
    
        public void kill(String target) {
            killer.shoot(target);
        }
    }
  4. Create a command bundle org.fipro.oneshot.assassinate with two different console commands that call the service
    @Component(
        property= {
            "osgi.command.scope=fipro",
            "osgi.command.function=assassinate"},
        service=AssassinateCommand.class
    )
    public class AssassinateCommand {
    
        private OneShot hitman;
    
        @Reference
        void setOneShot(OneShot oneShot) {
            this.hitman = oneShot;
        }
    
        public void assassinate(String target) {
            hitman.shoot(target);
        }
    }
    @Component(
        property= {
            "osgi.command.scope=fipro",
            "osgi.command.function=eliminate"},
        service=EliminateCommand.class
    )
    public class EliminateCommand {
    
        private ComponentContext context;
        private ServiceReference<OneShot> sr;
    
        @Activate
        void activate(ComponentContext context) {
            this.context = context;
        }
    
        @Reference(name="hitman")
        void setOneShotReference(ServiceReference<OneShot> sr) {
            this.sr = sr;
        }
    
        public void eliminate(String target) {
            OneShot hitman =
                (OneShot) this.context.locateService("hitman", sr);
            hitman.shoot(target);
        }
    }

The EliminateCommand uses the Lookup Strategy to lazily activate the referenced component. In this example that is probably quite useless, but I wanted to show that it also works fine.

Note:
I am using the DS 1.2 notation here to make it easier to follow the example in both worlds. In the DS 1.3 only examples later in this blog post, you will see the modified version of the components using DS 1.3 annotations.

The sources for this blog post can be found on GitHub:

One instance per runtime

There is not much to say about this. This is the default behavior if you do not specify something else. There is only one component configuration created and activated. Therefore only one component instance is created and shared between all bundles.

In DS 1.2 a singleton instance can be explicitly configured on the component like this:

@Component(servicefactory=false)
public class Hitman implements OneShot {

In DS 1.3 a singleton instance can be explicitly configured on the component like this:

@Component(scope=ServiceScope.SINGLETON)
public class Hitman implements OneShot {

Note:
For Immediate Components and Factory Components it is not allowed to use other values for servicefactory or scope!

If you launch an OSGi application with the necessary bundles (org.apache.felix.scr, org.apache.felix.gogo.*, org.fipro.oneshot.*) and call the commands one after the other, you should get an output similar to this (on a Felix console):

g! kill Dirk
BAM! I am hitman #1. And I killed Dirk
g! assassinate Dirk
BAM! I am hitman #1. And I killed Dirk
g! eliminate Dirk
BAM! I am hitman #1. And I killed Dirk

Every command has a reference to the same Hitman instance, as can be seen by the instance counter in the output.

One instance per bundle

There are use cases where it is useful to have one component configuration created and activated per bundle. For example, if the component configuration contains special bundle-related configuration values.

In DS 1.2 a bundle scope service can be configured on the component like this:

@Component(servicefactory=true)
public class Hitman implements OneShot {

In DS 1.3 a bundle scope service can be configured on the component like this:

@Component(scope=ServiceScope.BUNDLE)
public class Hitman implements OneShot {

When launching the OSGi application and calling the commands one after the other, you should get an output similar to this (on a Felix console):

g! kill Dirk
BAM! I am hitman #1. And I killed Dirk
g! assassinate Dirk
BAM! I am hitman #2. And I killed Dirk
g! eliminate Dirk
BAM! I am hitman #2. And I killed Dirk
g! kill Dirk
BAM! I am hitman #1. And I killed Dirk

You can see that the kill command has a reference to the Hitman instance #1, while the assassinate and the eliminate command both have a reference to the Hitman instance #2, as both reside in the same bundle.

One instance per requestor

There are some use cases where every consumer needs its own instance of a service. With DS 1.2 you could achieve this by creating a Factory Component. As this is basically the same as getting a service instance per request, I will explain the Factory Component in the following chapter. For now I will focus on the DS 1.3 variant to create and use a service instance per requestor.

In DS 1.3 the PROTOTYPE scope was introduced for this scenario.

@Component(scope=ServiceScope.PROTOTYPE)
public class Hitman implements OneShot {

Setting the scope of the service component to PROTOTYPE does not mean that every consumer gets a distinct service instance automatically. By default the result will be the same as with using the BUNDLE scope. So if you start the application with the updated Hitman service, you will get the same result as before.

The reason for this is the reference scope that was also introduced with DS 1.3. It is configured on the consumer side via @Reference and specifies how the service reference should be resolved. There are three possible values:

  • BUNDLE
    All component instances in a bundle will use the same service object. (default)
  • PROTOTYPE
    Every component instance in a bundle may use a distinct service object.
  • PROTOTYPE_REQUIRED
    Every component instance in a bundle must use a distinct service object.

As the default of the reference scope is BUNDLE, we see the same behavior for service scope PROTOTYPE as we saw for service scope BUNDLE. That means the consumer components need to be modified so that every consumer gets its own service instance.

@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=assassinate"},
    service=AssassinateCommand.class
)
public class AssassinateCommand {

    @Reference(scope=ReferenceScope.PROTOTYPE_REQUIRED)
    private OneShot hitman;

    public void assassinate(String target) {
        hitman.shoot(target);
    }
}
@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=eliminate"},
    service=EliminateCommand.class,
    reference=@Reference(
            name="hitman",
            service=OneShot.class,
            scope=ReferenceScope.PROTOTYPE_REQUIRED
    )
)
public class EliminateCommand {

    private ComponentContext context;

    @Activate
    void activate(ComponentContext context) {
        this.context = context;
    }

    public void eliminate(String target) {
        OneShot hitman =
            (OneShot) this.context.locateService("hitman");
        hitman.shoot(target);
    }
}

Note:
The above examples are showing the DS 1.3 version of the command services. You should recognize the usage of the field strategy and the DS 1.3 lookup strategy, which makes the code more compact.

Note:
In the example I have chosen to use the reference scope PROTOTYPE_REQUIRED. In the given scenario also PROTOTYPE would be sufficient, as the concrete service implementation uses the PROTOTYPE service scope. But IMHO it is better to specify directly which reference scope to use, instead of having a weak rule.

When launching the OSGi application and calling the commands one after the other, you should get an output similar to this (on a Felix console):

g! kill Dirk
BAM! I am hitman #1. And I killed Dirk
g! assassinate Dirk
BAM! I am hitman #2. And I killed Dirk
g! eliminate Dirk
BAM! I am hitman #3. And I killed Dirk
g! kill Dirk
BAM! I am hitman #1. And I killed Dirk

You can see that every command gets its own service instance.

One instance per request

In some use cases it is required to have a distinct service instance per request. This is for example needed for web requests, where services should be created and destroyed per request, or for multi-threading, where services can be executed in parallel (hopefully without side effects).

With DS 1.2 a Factory Component needs to be used. With DS 1.3 again the PROTOTYPE scope helps in solving that requirement. In both cases some OSGi DS API needs to be used to create (and destroy) the service instances.

First let's have a look at the DS 1.3 approach using PROTOTYPE scoped services and the newly introduced ComponentServiceObjects interface. An implementation of ComponentServiceObjects is a factory that allows creating and destroying service instances on demand. The following example shows the usage. Create it in the org.fipro.oneshot.command bundle.

@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=terminate"},
    service=TerminateCommand.class
)
public class TerminateCommand {

    // get a factory for creating prototype scoped service instances
    @Reference(scope=ReferenceScope.PROTOTYPE_REQUIRED)
    private ComponentServiceObjects<OneShot> oneShotFactory;

    public void terminate(String target) {
        // create a new service instance
        OneShot oneShot = oneShotFactory.getService();
        try {
            oneShot.shoot(target);
        } finally {
            // destroy the service instance
            oneShotFactory.ungetService(oneShot);
        }
    }
}

Note:
There is no special modification needed in the component configuration of the provider. It simply needs to be configured with a PROTOTYPE service scope as shown before. The consumer decides which instance should be referenced: the same instance as every other consumer in the bundle, a new one for every component, or a new one for each request.

Executing the terminate command multiple times will show that a new Hitman instance is created for each call. Mixing it with the previous commands will show that the other services keep a fixed instance, while terminate will constantly create and use a new instance per execution.

g! kill Dirk
BAM! I am hitman #1. And I killed Dirk
g! terminate Dirk
BAM! I am hitman #2. And I killed Dirk
g! terminate Dirk
BAM! I am hitman #3. And I killed Dirk
g! terminate Dirk
BAM! I am hitman #4. And I killed Dirk
g! kill Dirk
BAM! I am hitman #1. And I killed Dirk

Factory Component

With DS 1.2 you need to create a Factory Component to create a service instance per consumer or per request. The Factory Component is the third type of components specified by the OSGi Compendium Specification, next to the Immediate Component and the Delayed Component. It therefore also has its own lifecycle, which can be seen in the following diagram.

When the component configuration is satisfied, a ComponentFactory is registered. This can be used to activate a new component instance, which is destroyed once it is disposed or the component configuration is not satisfied anymore.

While this looks quite complicated at first sight, it is a lot easier when using DS annotations. You only need to specify the factory annotation type element on @Component. The following snippet shows this for a new OneShot implementation. For the exercise add it to the org.fipro.oneshot.provider bundle.

@Component(factory="fipro.oneshot.factory")
public class Shooter implements OneShot {

    private static AtomicInteger instanceCounter =
        new AtomicInteger(); 

    private final int instanceNo;

    public Shooter() {
        instanceNo = instanceCounter.incrementAndGet();
    }

    @Override
    public void shoot(String target) {
        System.out.println("PEW PEW! I am shooter #"
            + instanceNo + ". And I hit " + target);
    }

}

As explained above, the SCR will register a ComponentFactory that can be used to create and activate new component configurations on demand. On the consumer side this means it is not possible to get a Shooter service instance via @Reference, as it is not registered as a Delayed Component. You need to reference a ComponentFactory instance by specifying the correct target property: the key is component.factory and the value is the value of the factory annotation type element on the @Component annotation of the Factory Component.

The following snippet shows the consumer of a Factory Component. Create it in the org.fipro.oneshot.command bundle.

@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=shoot"},
    service=ShootCommand.class
)
public class ShootCommand {

    @Reference(target = "(component.factory=fipro.oneshot.factory)")
    private ComponentFactory factory;

    public void shoot(String target) {
        // create a new service instance
        ComponentInstance instance = this.factory.newInstance(null);
        OneShot shooter = (OneShot) instance.getInstance();
        try {
            shooter.shoot(target);
        } finally {
            // destroy the service instance
            instance.dispose();
        }
    }
}

Comparing the Factory Component with the PROTOTYPE scoped service, the following differences can be seen:

  • A PROTOTYPE scoped service is a Delayed Component, while the Factory Component is a different component type with its own lifecycle.
  • A Factory Component can only be consumed by getting the ComponentFactory injected, while a PROTOTYPE scoped service can be created and consumed in different ways.
  • A component configuration needs to be provided when creating the component instance via ComponentFactory (see the sketch after this list). A PROTOTYPE scoped service can simply use the configuration mechanisms provided in combination with the Configuration Admin.
  • ComponentServiceObjects is type-safe. The result of ComponentInstance#getInstance() needs to be cast to the desired type.
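
To illustrate the configuration point, the shoot method of the ShootCommand above could pass a configuration Dictionary to ComponentFactory#newInstance() instead of null. This is only a sketch; the property key is made up for the example:

public void shoot(String target) {
    // made-up configuration property, for illustration only
    Dictionary<String, Object> props = new Hashtable<>();
    props.put("shooter.mode", "silent");

    // create a new service instance with the given configuration
    ComponentInstance instance = this.factory.newInstance(props);
    OneShot shooter = (OneShot) instance.getInstance();
    try {
        shooter.shoot(target);
    } finally {
        // destroy the service instance
        instance.dispose();
    }
}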

Compared to creating the service instance by using the constructor, the nice thing about using a Factory Component or a PROTOTYPE scoped service is that the configured service references are resolved by the SCR. You could verify this for example by adding a reference to the StringInverter service from my previous blog post.
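
A minimal sketch of such a verification, assuming the StringInverter interface from the previous blog post has a single invert(String) method. The Shooter from above would simply get an additional reference:

// added to the Shooter Factory Component from above;
// the invert(String) method is an assumption about the StringInverter interface
@Reference
private StringInverter inverter;

@Override
public void shoot(String target) {
    // if this prints the inverted target, the reference was resolved
    // by the SCR when the instance was created via the ComponentFactory
    System.out.println("PEW PEW! I am shooter #"
        + instanceNo + ". And I hit " + inverter.invert(target));
}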

Note:
To create an instance per requestor by using a Factory Component, you would simply create the instance in the @Activate method, and dispose it on @Deactivate.
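
A minimal sketch of that approach, using a hypothetical SnipeCommand (class and command name are made up for this example) in the org.fipro.oneshot.command bundle:

@Component(
    property= {
        "osgi.command.scope=fipro",
        "osgi.command.function=snipe"},
    service=SnipeCommand.class
)
public class SnipeCommand {

    @Reference(target = "(component.factory=fipro.oneshot.factory)")
    private ComponentFactory factory;

    private ComponentInstance instance;
    private OneShot shooter;

    @Activate
    void activate() {
        // create one instance for this consumer when it gets activated
        this.instance = this.factory.newInstance(null);
        this.shooter = (OneShot) this.instance.getInstance();
    }

    @Deactivate
    void deactivate() {
        // dispose the created instance when this consumer is deactivated
        this.instance.dispose();
    }

    public void snipe(String target) {
        this.shooter.shoot(target);
    }
}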

Component Instance cleanup

When Peter Kirschner (Twitter: @peterkir) and I prepared our tutorial for the EclipseCon Europe 2016, we noticed a runtime difference between Equinox DS and Felix SCR. In the Console Exercise we also talked about the lifecycle methods and wanted to show them. So we added the @Activate and the @Deactivate method to the StringInverterImpl and the StringInverterCommand. Running the example on Equinox and executing the console command showed a console output for activating the service and the command, but neither of them was ever deactivated. Running the example with Felix SCR, the StringInverterCommand was activated, executed and deactivated right after the execution. We wondered about that different behavior, but were busy with other topics, so we didn't search further for the cause.

Note:
The tutorial sources and slides can be found on GitHub.

I recently learned what causes this different behavior and how it can be adjusted.

For Delayed Components the OSGi Compendium Specification says:
If the service registered by a component configuration becomes unused because there are no more bundles using it, then SCR should deactivate that component configuration.

Should is quite a weak statement, so it is easy to have a different understanding of this part of the specification. Apache Felix SCR takes that statement very seriously and deactivates and destroys the component once the last consumer that references the component instance is done with it. Equinox DS on the other hand keeps the instance. At least this is the default behavior in those SCR implementations. But both can be configured via system properties to behave differently.

To configure Equinox DS to dispose component instances that are no longer used (like the Felix SCR default behavior), use the following JVM parameter (see Equinox DS Runtime Options):

-Dequinox.scr.dontDisposeInstances=false

To configure Felix SCR to keep component instances and not dispose them once they are no longer used (like the Equinox DS default behavior), use the following Framework property, e.g. by setting it as JVM parameter (see Felix SCR Configuration):

-Dds.delayed.keepInstances=true

To get some more insight on this you might also want to look at the ticket in the Felix bug tracker where this was discussed.

To experiment with that you can modify for example Hitman and KillCommand and add methods for @Activate and @Deactivate.

@Activate
void activate() {
    System.out.println(
        getClass().getSimpleName() + " activated");
}

@Deactivate
void deactivate() {
    System.out.println(
        getClass().getSimpleName() + " deactivated");
}

Add the JVM arguments for the runtime you are experimenting with and check the results.

Posted in Dirk Fauth, Eclipse, Java, OSGi | 2 Comments