No more tragedy of the commons for Platform UI

Remember when people spoke about the tragedy of the commons for the Eclipse platform?

I think it is safe to say that these times are over for Platform UI.

I think Platform UI now shows the hard work we have put into cleaning up the code base and into onboarding new committers and contributors.


Thanks to all committers and contributors on Platform UI!

Posted in Eclipse, Lars Vogel | Comments Off on No more tragedy of the commons for Platform UI

Using CompletableFuture in your Eclipse RCP application

If you want to update your RCP application asynchronously, you can use Java 8 CompletableFuture. For example, the following starts a CompletableFuture, uses the getData method to read the data and afterwards calls the updateTable method.

button.addSelectionListener(new SelectionAdapter() {
	@Override
	public void widgetSelected(SelectionEvent e) {
		CompletableFuture.supplyAsync(() -> getData()).thenAccept(list -> updateTable(list));
	}
});

public List<String> getData() {
	// fake slow operation
	try {
		Thread.sleep(2000);
	} catch (InterruptedException ex) {
		Thread.currentThread().interrupt();
	}
	return Arrays.asList("1", "2", "3", "4", "5");
}

public void updateTable(List<String> list) {
	Display current = Display.getDefault();
	current.asyncExec(new Runnable() {
		@Override
		public void run() {
			// update the widget with the new data, e.g. viewer.setInput(list)
		}
	});
}
It would be nice if CompletableFuture could run the thenAccept in the SWT display thread, similar to the schedulers in RxJava for Android, but I have not found a way to do that. Suggestions are welcome. 🙂
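One avenue worth trying is thenAcceptAsync with a custom Executor: since Display#asyncExec takes a single Runnable, the method reference Display.getDefault()::asyncExec matches the Executor functional interface and can be passed as the executor for the UI update. A minimal sketch (the direct executor below is only a stand-in for the Display, so the snippet runs without SWT):

```java
import java.util.Arrays;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

public class SwtCompletableFutureSketch {

	public static void main(String[] args) {
		// In an RCP application this would be the SWT display thread:
		// Executor uiExecutor = Display.getDefault()::asyncExec;
		// The direct executor below is a stand-in so this compiles without SWT.
		Executor uiExecutor = Runnable::run;

		CompletableFuture
				.supplyAsync(() -> Arrays.asList("1", "2", "3", "4", "5"))
				// the consumer is handed to the given executor, i.e. the UI thread
				.thenAcceptAsync(list -> System.out.println("updateTable(" + list + ")"), uiExecutor)
				.join();
	}
}
```

With the Display-backed executor, the supplier runs in the common ForkJoinPool while the consumer is dispatched via asyncExec, so the table update would land on the display thread.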

Posted in Eclipse, Lars Vogel | Tagged | 7 Comments

Joining the Eclipse Project Management Committee

I’m honored to join the Eclipse Project Management Committee (PMC) for the Eclipse Top-Level Project. See
Eclipse project charter for their responsibilities.

As an Eclipse committer and project lead for Platform UI and e4, my main goals are:

  • Attract and win new contributors and committers
  • Improve stability and performance of the Eclipse IDE
  • Enhance the Eclipse RCP programming model

To help achieve this, my main work items are:

  • Clean up the Eclipse code base
    Update the Eclipse code to new framework versions and Java versions
  • Simplify and automate the committer workflow
  • Review Gerrit contributions as much as possible
  • Coach potential new committers
  • Simplify and enhance the platform UI and its API usage

I have always felt that the existing PMC members supported this work. Joining them is a great honor, and I hope to help enhance the Eclipse IDE further.

Posted in Eclipse, Lars Vogel | Comments Off on Joining the Eclipse Project Management Committee

Run an Eclipse 32-bit application from a 64-bit Eclipse IDE

Typically, the development environment should not depend on the target environment the application will run on. For an Eclipse RCP application using SWT, this is not as trivial as it looks, because the SWT implementation is packaged in platform-dependent bundle fragments. But it is possible to set up the workspace to make that work, which I will show in the following blog post.

Use Case

You need to maintain an Eclipse RCP application that makes use of 32-bit Windows native libraries. Since you got a brand new laptop or PC that runs a 64-bit Windows, you install the 64-bit version of Eclipse. As you are aware that you need to execute the application in a 32-bit JVM, you add a 32-bit JDK via Window -> Preferences -> Java -> Installed JREs and configure that JDK as the default for the JavaSE-1.8 Execution Environment.


At development time you want to start your application from the IDE, e.g. via .product file -> Launch an Eclipse application. But you get the following error:

java.lang.UnsatisfiedLinkError: Cannot load 64-bit SWT libraries on 32-bit JVM


The reason for this is clear: you installed a 64-bit Eclipse, so you only have the 64-bit bundle fragment of SWT in your installation. But you need the 32-bit SWT fragment. This can be solved easily by configuring the target platform appropriately.

  • Create a new Target Platform
  • Switch to the Environment tab in the PDE Target Editor
  • Change the Architecture to x86
  • Switch to the Definition tab
  • Click Reload (this is important to retrieve the x86 fragments!)
  • Switch to the Content tab and check if the correct fragment is now part of the target platform (check for the org.eclipse.swt.win32.win32.x86 fragment)
  • Activate the target platform via Set as Target Platform
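For reference, the settings on the Environment tab end up in the environment section of the .target file. A stripped-down sketch (target name is made up; the locations section with the software sites is omitted):

```
<?xml version="1.0" encoding="UTF-8"?>
<target name="win32-x86-target">
	<!-- locations section with the software sites omitted -->
	<environment>
		<os>win32</os>
		<ws>win32</ws>
		<arch>x86</arch>
	</environment>
</target>
```

When this target is active, PDE resolves the org.eclipse.swt.win32.win32.x86 fragment instead of the 64-bit one.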

Now it is possible to execute a 32-bit application from a 64-bit Eclipse IDE via .product file -> Launch an Eclipse application.

Note: Remember to start via the .product file and not via an existing run configuration, because an existing run configuration would still need to be updated to the new environment settings.




Posted in Dirk Fauth, Eclipse | 2 Comments

OSGi – bundles / fragments / dependencies

In the last weeks I needed to look at several issues regarding OSGi dependencies in different products. A lot of these issues were IMHO related to wrong usage of OSGi bundle fragments. As I needed to search for various solutions, I will publish my results and my opinion on the usage of fragments in this post. Partly also as a reminder for myself in the future.

What is a fragment?

As explained in the OSGi Wiki, a fragment is a bundle that makes its contents available to another bundle. And most importantly, a fragment and its host bundle share the same classloader.

Looking at this from a more abstract point of view, a fragment is an extension to an existing bundle. This might be a simplified statement, but keeping it in mind helped me solve several issues.

What are fragments used for?

I have seen a lot of different usage scenarios for fragments. Considering the above statement, some of them were wrong by design. But before explaining when not to use fragments, let’s look at when they are the right choice. Basically, fragments need to be used whenever a resource needs to be accessible by the classloader of the host bundle. There are several use cases for that; most of them rely on technologies and patterns based on standard Java. For example:

  • Add configuration files to a third-party-plugin
    e.g. provide the logging configuration (log4j.xml for the org.apache.log4j bundle)
  • Add new language files for a resource bundle
    e.g. a properties file for locale fr_FR that needs to be located next to the other properties files by specification
  • Add classes that need to be dynamically loaded by a framework
    e.g. provide a custom logging appender
  • Provide native code
    This can be done in several ways, but more on that shortly.

In short: fragments are used to customize a bundle
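As a small illustration of the first use case, a fragment that only contributes a logging configuration to org.apache.log4j needs little more than a manifest naming its host (the symbolic name and versions here are made up):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.log4j.config
Bundle-Version: 1.0.0
Fragment-Host: org.apache.log4j
```

A log4j.xml placed in the root of this fragment is then visible to the classloader of org.apache.log4j, exactly as if it were part of the host bundle.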

When are fragments the wrong choice?

To explain this we will look at the different ways to provide native code as an example.

One way is to use the Bundle-NativeCode manifest header. This way the native code for all environments is packaged in the same bundle. So no fragments here, but this is sometimes not easy to set up. At least I struggled with this approach some years ago.
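For reference, such a header lists the native libraries per environment inside one bundle; a sketch with placeholder library paths could look like this:

```
Bundle-NativeCode: lib/mylib.dll; osname=win32; processor=x86,
 lib/libmylib.so; osname=linux; processor=x86-64
```

The OSGi framework then selects the matching clause for the running platform when the bundle is resolved.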

A more common approach is to use fragments. For every supported platform there is a corresponding fragment that contains the platform-specific native library. The host bundle, on the other hand, typically contains the Java code that loads the native library and provides the interface to access it (e.g. via JNI). This scenario is IMHO a good example of using fragments to provide native code: the fragments only extend the host bundle without exposing anything publicly.

Another approach is the SWT approach. The difference to the above scenario is that the host bundle org.eclipse.swt is an almost empty bundle that only contains the OSGi meta information in the MANIFEST.MF. The native libraries as well as the corresponding Java code are supplied via platform-dependent fragments. Although SWT is often referred to as the reference for dealing with native libraries in OSGi, I think that approach is wrong.

To elaborate why I think the approach org.eclipse.swt is using is wrong, we will have a look at a small example.

  1. Create a host bundle in Eclipse via File -> New -> Plug-in Project and give it a name (e.g. org.fipro.host). Ensure not to create an Activator or anything else.
  2. Create a fragment for that host bundle via File -> New -> Other -> Plug-in Development -> Fragment Project and give it a name (e.g. org.fipro.host.fragment). Specify the host bundle on the second wizard page.
  3. Create a package (e.g. org.fipro.host.helper) in the fragment project.
  4. Create the following simple class (yes, it has nothing to do with native code in fragments, but it also shows the issues).
    public class MyHelper {
    	public static void doSomething() {
    		System.out.println("do something");
    	}
    }

So far, so good. Now let’s consume the helper class.

  1. Create a new bundle via File -> New -> Plug-in Project and name it org.fipro.consumer. This time let the wizard create an Activator.
  2. In Activator#start(BundleContext) try to call MyHelper#doSomething()

Now the fun begins. Of course MyHelper cannot be resolved at this time. We first need to make the package consumable in OSGi. This can be done in the fragment or the host bundle. I personally tend to configure Export-Package in the bundle/fragment where the package is located, so we add the Export-Package manifest header to the fragment. To do this, open the MANIFEST.MF file of the fragment, switch to the Runtime tab and click Add… to add the package.

Note: As a fragment is an extension to a bundle, you can also specify the Export-Package header in the host bundle; org.eclipse.swt is configured this way. But notice that in that case the fragment packages are not automatically resolved by the PDE Manifest Editor, so you need to add the manifest header manually.

After that the package can be consumed by other bundles. Open the file org.fipro.consumer/META-INF/MANIFEST.MF and switch to the Dependencies tab. At this point it doesn’t matter whether you use Required Plug-ins or Imported Packages, although Import-Package should always be the preferred way, as we will see shortly.
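In manifest terms, the two variants in org.fipro.consumer/META-INF/MANIFEST.MF would look like this (bundle and package names are illustrative; only one of the two headers is needed):

```
Require-Bundle: org.fipro.host

Import-Package: org.fipro.host.helper
```

With Import-Package the consumer declares a dependency on the functionality only, which will matter later in this post.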

Although the manifest headers are configured correctly, the MyHelper class can not be resolved. The reason for this is the PDE tooling: it needs additional information to construct proper classpaths for building. This information is provided by adding the following line to the manifest file of the host bundle:

Eclipse-ExtensibleAPI: true

After this additional header is added, the compilation errors are gone.

Note: This additional manifest header is not necessary at runtime and is not evaluated there. At runtime a fragment is always allowed to add additional packages, classes and resources to the API of the host.

After the compilation errors are gone in our workspace and the application runs fine, let’s try to build it with Maven Tycho. I don’t want to walk through the whole process of setting up a Tycho build, so let’s simply assume you have a running Tycho build and include the three projects in that build. Using POM-less Tycho this simply means adding the three projects to the modules section of the build.

You can find further information on Tycho here:
Eclipse Tycho for building Eclipse Plug-ins and RCP applications
POM-less Tycho builds for structured environments

Running the build will fail because of a compilation failure: the Activator class does not compile because the import cannot be resolved. Similar to PDE, Tycho is not aware of the build dependency to the fragment. This can be solved by adding an extra classpath entry to the build.properties of the org.fipro.consumer project:

extra.. = platform:/fragment/<fragment-symbolic-name>

See the Plug-in Development Environment Guide for further information about build configuration.

After that entry is added to the build.properties of the consumer bundle, the Tycho build succeeds as well.

What is wrong with the above?

At first sight it is quite obvious what is wrong with the above solution: you need to configure the tooling in several places to make the compilation and the build work. These workarounds even introduce dependencies where there shouldn’t be any. In the above example this might not be a big issue, but think about platform-dependent fragments. Do you really want to configure a build dependency to a win32.win32.x86 fragment on the consumer side?

The above scenario even introduces issues for installations with p2. Using the empty host with implementations in the fragments forces you to ensure that at least (or exactly) one fragment is installed together with the host, which is another workaround in my opinion (see Bug 361901 for further information).

OSGi purists will say that the main issue lies in the PDE tooling and Tycho, because there the build dependencies are kept as close as possible to the runtime dependencies (see for example here). Using tools like Bndtools you don’t need these workarounds, and in principle I agree with that. But unfortunately it is not possible (or hard to achieve) to use Bndtools for Eclipse application development, mainly because plain OSGi knows nothing about Eclipse features, applications and products. Therefore the feature-based update mechanism of p2 is not usable either. But I don’t want to start the PDE vs. Bndtools discussion; that is worth another post (or series of posts).

In my opinion the real issue in the above scenario, and therefore also in org.eclipse.swt, is the wrong usage of fragments. Why is there a host bundle that only contains the OSGi meta information? After thinking about this for a while, I realized that the only reason can be laziness! Users want to use Require-Bundle instead of configuring the several needed Import-Package entries. IMHO this is the only reason the org.eclipse.swt bundle with its multiple platform-dependent fragments exists.

Let’s try to think about possible changes: make every platform-dependent fragment a bundle and configure the Export-Package manifest header for every bundle. That’s it on the provider side. If you wonder about the Eclipse-PlatformFilter manifest header: it works for bundles as well as for fragments, so we don’t lose anything here. On the consumer side we need to ensure that Import-Package is used instead of Require-Bundle. This way we declare dependencies on the functionality, not on the bundle where the functionality originates. That’s all! Using this approach, the workarounds mentioned above can be removed, and PDE and Tycho work as intended, as they can simply resolve bundle dependencies. I have to admit that I’m not sure about p2 regarding the platform-dependent bundles; that would need to be checked separately.
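To sketch that proposal, each platform bundle would carry its own Export-Package together with a platform filter (all names here are made up for illustration):

```
Bundle-SymbolicName: org.example.nativelib.win32.win32.x86
Bundle-Version: 1.0.0
Export-Package: org.example.nativelib
Eclipse-PlatformFilter: (& (osgi.ws=win32) (osgi.os=win32) (osgi.arch=x86))
```

A consumer would then only declare Import-Package: org.example.nativelib and never reference a concrete platform bundle.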


Having a look at the two initial statements about fragments

  • a fragment is an extension to an existing bundle
  • fragments are used to customize a bundle

it is IMHO wrong to make API publicly available from a fragment. These statements could even be modified to become the following:

  • a fragment is an optional extension to an existing bundle

Having that statement in mind, things get even clearer when thinking about fragments. Here is another example to strengthen my statement: suppose you have a host bundle that already exports a package, a fragment adds an additional public class to that package, and a consumer bundle uses that class. Using Bndtools or the workarounds for PDE and Tycho shown above, this compiles and builds fine. But what if the fragment is not deployed or resolved at runtime? Since there is no constraint on the consumer bundle that would identify the missing fragment, the consumer bundle will start, and you will get a ClassNotFoundException at runtime.

Personally I think that every time a direct dependency to a fragment is introduced, something is wrong.

There might be exceptions to that rule. One could be a custom logging appender that needs to be accessible in other places, e.g. for programmatic configuration. As the logging appender needs to live in the same classloader as the logging framework (e.g. org.apache.log4j), it needs to be provided via a fragment, and to access it programmatically, a direct dependency to the fragment is needed. But honestly, even in such a case a direct dependency to the fragment can be avoided with a good module design. Such a design could, for example, make the appender an OSGi service: the service interface would be defined in a separate API bundle and the programmatic access would be implemented against the service interface. Therefore no direct dependency to the fragment would be necessary.

As I struggled for several days searching for solutions to fragment dependency issues, I hope this post can help others solve such issues. Basically my solution is to get rid of all fragments that export API and either make them separate bundles or let them provide their API via services.

If someone with a deeper knowledge in OSGi ever comes by this post and has some comments or remarks about my statements, please let me know. I’m always happy to learn something new or getting new insights.

Posted in Dirk Fauth, Eclipse, OSGi | Comments Off on OSGi – bundles / fragments / dependencies

Substring code completion in Eclipse JDT

As of yesterday’s Eclipse 4.6 integration build, we offer substring code completion by default in Eclipse JDT.


This brings a feature known from the IntelliJ IDE and the Eclipse Code Recommenders project to JDT users and helps to further enhance the Java Development Tools in Eclipse.

This feature was originally developed within a Google Summer of Code project by Gábor Kövesdán, with Noopur Gupta and myself as mentors. After the project finished, the JDT team polished this work quite a bit and activated it yesterday, including improved highlighting in the code proposals.

You can find the latest and greatest integration build (I20160112-1800) on the Eclipse download servers if you want to try it out.

Posted in Eclipse, Lars Vogel | 5 Comments

POM-less Tycho builds for structured environments

With Tycho 0.24, POM-less Tycho builds were introduced. That approach uses convention-over-configuration to reduce the amount of redundant information needed for setting up a Tycho build. In short, this means you don’t need to create and maintain pom.xml files for bundle, feature and test projects anymore, as the whole information can be extracted from the already existing information in MANIFEST.MF or feature.xml.

Lars Vogel shows in his Tycho Tutorial a recommended folder structure, that is also widely used in Eclipse projects.

The meaning of that folder structure is:

  • bundles
    contains all plug-in projects
  • features
    contains all feature projects
  • products
    contains all product projects
  • releng
    contains projects related to the release engineering, like

    • the project containing the parent POM
    • the aggregator project that contains the aggregator POM which defines the modules of a build and is also the starting point of a Tycho build
    • the target definition project that contains the target definition for the Tycho build
  • tests
    contains all test plug-in/fragment projects

This structure helps in organizing the project. But there is one convention for POM-less Tycho builds that does not work out-of-the-box with the given folder structure: “The parent pom for features and plugins must reside in the parent directory”. Knowing the Maven mechanics, this convention can be satisfied easily by introducing some POM files that simply connect to the real parent POM. I call them POM-less parent POM files. These POM-less parents need to be put into the base directories bundles, features and tests, and they do nothing else than specify the real parent POM of the project (which is located in a sub-directory of releng).

The following snippet shows a POM-less parent example for the bundles folder:

<project xmlns="http://maven.apache.org/POM/4.0.0"
		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<!-- groupId, artifactId, version and relativePath of the real parent POM are project-specific placeholders -->
	<parent>
		<groupId>org.example</groupId>
		<artifactId>org.example.parent</artifactId>
		<version>1.0.0-SNAPSHOT</version>
		<relativePath>../releng/org.example.parent</relativePath>
	</parent>

	<artifactId>bundles</artifactId>
	<packaging>pom</packaging>
</project>
For the features and tests folder you simply need to modify the artifactId accordingly.

Note that you don’t need to reference the POM-less parent POM files in the modules section of the aggregator POM.

Following the best practices regarding the folder structures of a project and the conventions for POM-less Tycho builds, you will have at least 7 pom.xml files in your project.

  • parent POM
    the main build configuration
  • aggregator POM
    the collection of modules to build
  • target-definition POM
    the eclipse target definition build
  • product POM
    the eclipse product build
  • POM-less parent POM for bundles
    the connection to the real parent POM for POM-less plug-in builds
  • POM-less parent POM for features
    the connection to the real parent POM for POM-less feature builds
  • POM-less parent POM for tests
    the connection to the real parent POM for POM-less test plug-in builds

Of course there will be more if you provide a multi-product environment or if you need to customize the build of a plug-in for example.

With the necessary Maven extension descriptor for enabling the POM-less Tycho build (see POM-less Tycho builds), the folder structure will look similar to the following screenshot:
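For reference, the mentioned extension descriptor is the file .mvn/extensions.xml in the root directory of the build; for Tycho 0.24 it registers the tycho-pomless extension:

```
<?xml version="1.0" encoding="UTF-8"?>
<extensions>
	<extension>
		<groupId>org.eclipse.tycho.extras</groupId>
		<artifactId>tycho-pomless</artifactId>
		<version>0.24.0</version>
	</extension>
</extensions>
```

Maven picks this file up automatically, so no further activation is needed.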


I hope this blog post will help people set up Tycho builds for their products more easily using POM-less Tycho.


Posted in Dirk Fauth, Eclipse | 6 Comments

Make Retrofit ready for usage in OSGi

Retrofit is a really great library for accessing REST APIs. It is often used in Android apps because it is really lightweight and easy to use.

I’d also love to use this library in my Eclipse 4 RCP applications, so let’s make use of Retrofit there as well.

So we download the retrofit artifacts and make use of them. But wait..! For Eclipse applications we need OSGi bundles rather than plain Java artifacts, and looking at the MANIFEST.MF file of the retrofit jar archive, there is no OSGi bundle metadata.

Fortunately there are many tools out there to convert plain Java artifacts into OSGi bundles, e.g., p2-maven-plugin (Maven) or bnd-platform (Gradle).

Since I am involved in the Buildship development (Gradle tooling for Eclipse) and we now also offer Gradle training besides our Maven training, I chose the bnd-platform plugin for Gradle.

The build.gradle file then looks like this:

buildscript {
	repositories {
		// repository hosting the bnd-platform plugin (assumption: Maven Central)
		mavenCentral()
	}
	dependencies {
		classpath 'org.standardout:bnd-platform:1.2.0'
	}
}

apply plugin: 'org.standardout.bnd-platform'

repositories {
	mavenCentral()
}

platform {
	bundle 'com.squareup.retrofit:retrofit:2.0.0-beta2'

	bundle 'com.squareup.retrofit:converter-gson:2.0.0-beta2'
}

Once Gradle has been set up properly, the desired bundles can be converted with the bundles task from the org.standardout.bnd-platform plugin:

/retrofit-osgi-convert$ ./gradlew bundles

After running the bundles task, retrofit, a JSON converter (in this case GSON) and their transitive dependencies are available as converted OSGi bundles in the /retrofit-osgi-convert/build/plugins folder.

See the Gradle tutorial for further information.

When adding these converted bundles to the target platform of an Eclipse RCP application, it should usually work out of the box.

But …! After adding the retrofit and gson converter bundles as dependencies to my plugin’s MANIFEST.MF file I still get compile errors. 🙁

So what went wrong? Basically two things! The first problem is obvious: when looking into the generated MANIFEST.MF metadata of retrofit, there is an import for the android.os package. This import was added automatically during the conversion. The readme of the bnd-platform plugin explains how to configure the imports.

The second thing is that retrofit and its converter bundles have split packages, which is fine for plain Java projects, but not for OSGi bundles. So the split package problem also has to be resolved.

Fortunately this can also be configured in the build.gradle file:

platform {

	// Convert the retrofit artifact to OSGi, make android.os optional and handle the split package problems in OSGi
	bundle('com.squareup.retrofit:retrofit:2.0.0-beta2') {
		bnd {
			optionalImport 'android.os'
			instruction 'Export-Package', 'retrofit;com.squareup.retrofit=split;mandatory:=com.squareup.retrofit, retrofit.http'
		}
	}

	// Convert the retrofit gson converter artifact to OSGi and handle the split package problems in OSGi
	bundle('com.squareup.retrofit:converter-gson:2.0.0-beta2') {
		bnd {
			instruction 'Require-Bundle', 'com.squareup.retrofit'
			instruction 'Export-Package', 'retrofit;com.squareup.retrofit.converter-gson=split;mandatory:=com.squareup.retrofit.converter-gson'
		}
	}

	// You can add other converters similar to the gson converter above...
}

The actual build.gradle file can be found on Github.

After resolving these problems, no compile errors are left. 🙂 But when running the application, a java.lang.IllegalAccessError: tried to access class retrofit.Utils from class retrofit.GsonResponseBodyConverter occurred.

The cause for this is the modifiers used in the retrofit.Utils class: the class itself is package private and so is its closeQuietly method. So even with the split package rules applied, these package private access rules prohibit the usage of the closeQuietly method from the converter bundles (gson, jackson etc.).

Now comes the part why I love open source so much: I checked out the retrofit sources, made some changes, built retrofit locally, tried my local OSGi version with my changes and finally provided a fix for this. Thanks a lot @JakeWharton for merging my pull request that fast.

Retrofit and its GSON converter can already be obtained from bintray as p2 update site:

For further information and a complete example, please refer to the GitHub repository.

This repository contains the conversion script for making OSGi bundles from retrofit artifacts and a sample application, which shows how to make use of retrofit in an Eclipse 4 RCP application. Just clone the repository into an Eclipse workspace, activate the target platform from the retrofit-osgi-target project and start the product in the de.simonscholz.retrofit.product project.

Retrofit and Eclipse 4


Feedback is highly appreciated.

Happy retrofitting in your OSGi applications 😉

Posted in Eclipse, Java, OSGi, Simon Scholz | Comments Off on Make Retrofit ready for usage in OSGi

Contributing to the Eclipse IDE – Second edition available as paper book and also as free download

I’m happy to announce that I finished the second edition of the Contributing to the Eclipse IDE book.


To help with the contribution process, I decided to also release this edition as a free download.

Posted in Eclipse, Lars Vogel | Comments Off on Contributing to the Eclipse IDE – Second edition available as paper book and also as free download

Git statistics from Eclipse Platform.UI in September 2015

Last month our cleanup hero was Sopot! He deleted the 2.0 compatibility layer plug-in.

Developers with the most changesets
Lars Vogel 42 (40.4%)
Markus Keller 11 (10.6%)
Simon Scholz 8 (7.7%)
Dani Megert 6 (5.8%)
Brian de Alwis 6 (5.8%)
Dirk Fauth 5 (4.8%)
Jonas Helming 5 (4.8%)
Stefan Xenos 4 (3.8%)
Sopot Cela 3 (2.9%)
Matthias Becker 3 (2.9%)
Alexander Kurtakov 3 (2.9%)
Patrik Suzzi 2 (1.9%)
Sergey Prigogin 1 (1.0%)
Andrey Loskutov 1 (1.0%)
Dariusz Stefanowicz 1 (1.0%)
Christian Radspieler 1 (1.0%)
Christian Georgi 1 (1.0%)
Daniel Haftstein 1 (1.0%)

Developers with the most changed lines
Sopot Cela 9226 (40.0%)
Lars Vogel 5510 (23.9%)
Stefan Xenos 3266 (14.1%)
Markus Keller 1728 (7.5%)
Dirk Fauth 1107 (4.8%)
Simon Scholz 861 (3.7%)
Jonas Helming 504 (2.2%)
Brian de Alwis 439 (1.9%)
Alexander Kurtakov 228 (1.0%)
Matthias Becker 139 (0.6%)
Patrik Suzzi 37 (0.2%)
Andrey Loskutov 20 (0.1%)
Dani Megert 9 (0.0%)
Sergey Prigogin 6 (0.0%)
Christian Georgi 6 (0.0%)
Daniel Haftstein 2 (0.0%)
Dariusz Stefanowicz 1 (0.0%)
Christian Radspieler 1 (0.0%)

Developers with the most lines removed
Sopot Cela 9213 (43.6%)
Lars Vogel 2244 (10.6%)
Dirk Fauth 352 (1.7%)
Markus Keller 201 (1.0%)
Simon Scholz 171 (0.8%)
Alexander Kurtakov 80 (0.4%)
Daniel Haftstein 1 (0.0%)

Posted in Eclipse, Lars Vogel | 2 Comments