Thursday, July 7, 2011

Concurrent Access to Models

Most people know that EMF models are inherently unsafe to access concurrently from multiple threads. That becomes immediately obvious when you look at the accessor code that the standard JET templates generate.
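The generated accessors are no longer shown in this copy of the post. The following is a hypothetical sketch, not actual JET output, of the unsynchronized lazy-initialization pattern that makes such accessors unsafe:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the accessor style that code generators commonly
// produce: the lazy initialization below is not synchronized, so two threads
// calling getBooks() concurrently can race on the null check.
public class Library {
    private List<String> books; // in real EMF this would be an EList<Book>

    public List<String> getBooks() {
        if (books == null) {
            // Unsynchronized check-then-act: a classic race condition.
            // Two threads can both see null and each create a fresh list.
            books = new ArrayList<>();
        }
        return books;
    }
}
```

Two threads that call getBooks() on a fresh object at the same time can both observe books == null, each create a list, and whatever one thread added to its list is silently lost.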

Ed usually argues that it's the application's responsibility to control concurrent access to the model if it knows that multiple threads are involved. The application knows best how to do this efficiently for its specific access patterns and, ideally, how to avoid deadlocks. Note that adding synchronized modifiers everywhere is counterproductive: it wouldn't make the model completely thread-safe, and in addition unordered access would likely lead to deadlocks that are hard or impossible to resolve.

A model is basically nothing more than an object graph, and how a particular thread navigates through such an object graph is highly specific to the particular application. As a result, the most commonly implemented locking scope is the entire model: only one thread at a time can access the model, and all other threads must block on a single mutex:
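The diagram that originally illustrated this is not preserved here. As a rough sketch (illustrative names, not any real EMF API), the whole-model locking scope boils down to funneling every read and write through one mutex:

```java
import java.util.function.Supplier;

// Hypothetical sketch of the coarse-grained locking scope described above:
// every reader and writer of the model must hold the single model-wide mutex.
public class ModelLock {
    private final Object mutex = new Object();

    // All reads of the model graph go through here.
    public <T> T read(Supplier<T> reader) {
        synchronized (mutex) {
            return reader.get();
        }
    }

    // All writes to the model graph go through here.
    public void write(Runnable writer) {
        synchronized (mutex) {
            writer.run();
        }
    }
}
```

This is safe but serializes everything: even two pure readers block each other, which is exactly the coarse-grained drawback discussed next.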

The EMF Transaction project supports a protocol for clients to read and write EMF models on multiple threads but it has two major drawbacks:
  • It is very coarse grained because the locking scope is the entire model.
  • It is intrusive because each single access to the model must be wrapped.
A one-way road! What if, instead of letting threads compete for the ability to access the model, we handed a separate model copy to each thread? This is neither coarse-grained nor intrusive, because each thread can access all model elements at all times with normal application code; no wrapper commands are needed.

This approach obviously enables concurrent threads to access the (their) model at any time, but hey, isn't it extremely expensive to instantiate the entire model multiple times? Of course it is! So let's go further down this road and see what can be done to solve the footprint issue.

Let's assume that in the most common scenarios the models can be pretty big but a single transaction, i.e., the number of objects changed between two consecutive commits, is rather small. Then we could refactor our model classes to delegate all model state access to a new kind of entity that can now be shared among the model objects of all open transactions. Let's call these shared entities revisions and their managing container a session.

The model objects are now very cheap in terms of footprint because they only store a pointer to their current revision, in addition to some general EMF infrastructure such as the list of adapters. The revisions contain all the modeled state plus a version number (which is explained below).
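A minimal sketch of this split, with illustrative names only (real CDO revisions carry much more, such as object ids and branch points):

```java
// Hypothetical sketch of the revision idea: the model object holds only a
// pointer to its current revision; the revision carries the modeled state
// plus a version number. Revisions are immutable and can safely be shared
// among the model objects of all open transactions.
final class Revision {
    final int version;
    final String name; // the modeled state, reduced to a single attribute

    Revision(int version, String name) {
        this.version = version;
        this.name = name;
    }
}

class ModelObject {
    private volatile Revision revision; // the only per-object state

    ModelObject(Revision revision) {
        this.revision = revision;
    }

    String getName() {
        return revision.name; // all state access delegates to the revision
    }

    void setRevision(Revision newRevision) {
        this.revision = newRevision; // adjust the pointer, e.g. after a commit
    }
}
```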

Nice, now the model can be read by multiple threads without blowing up main memory. But this design does not yet address the original problem of concurrent write access! The modifications that one thread applies to a model object end up in a shared revision, possibly overwriting changes made by other threads.

It's obvious that transaction-scoped writes must not alter the shared state. So we refactor our model classes again so that the setters automatically create and link copies of the shared revisions they touch. Let's call them transactional revisions.
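The copy-on-write behavior of such setters can be sketched as follows; the revisions are modeled here as plain maps and all names are illustrative, not actual CDO API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of copy-on-write setters: the first write to an object
// inside a transaction copies its shared revision into a private
// "transactional revision"; further reads and writes use that copy.
class Transaction {
    // shared revisions, keyed by object id: the state visible to everyone
    final Map<String, Map<String, Object>> shared;

    // transactional revisions: private copies created on first write
    final Map<String, Map<String, Object>> dirty = new HashMap<>();

    Transaction(Map<String, Map<String, Object>> shared) {
        this.shared = shared;
    }

    void set(String id, String feature, Object value) {
        // copy the shared revision on first write, then modify only the copy
        dirty.computeIfAbsent(id, k -> new HashMap<>(shared.get(k)))
             .put(feature, value);
    }

    Object get(String id, String feature) {
        // prefer the transactional revision; fall back to the shared one
        Map<String, Object> rev = dirty.getOrDefault(id, shared.get(id));
        return rev.get(feature);
    }
}
```

Other threads keep reading the untouched shared revision until this transaction commits.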

A simple implementation of a commit operation would execute these steps:
  1. Check the versions of all transactional revisions against the versions of the current shared revisions to detect conflicting commits of other transactions.
  2. Move the transactional revisions into the session.
  3. Notify other transactions so that they can adjust their revision pointers to the new shared revisions if needed. Note that conflict potential in these other transactions can be detected early at this point in time!
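The version-check part of these steps can be sketched like this; the sketch only tracks version numbers per object id, and all names are illustrative (real CDO performs this check in the repository):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the commit steps above: compare the versions a
// transaction started from against the current shared versions to detect
// conflicting commits, then publish the new versions.
class Session {
    // shared revisions, reduced to: object id -> current version number
    final Map<String, Integer> sharedVersions = new ConcurrentHashMap<>();

    synchronized void commit(Map<String, Integer> baseVersions) {
        // Step 1: detect conflicting commits of other transactions
        for (Map.Entry<String, Integer> e : baseVersions.entrySet()) {
            if (!sharedVersions.get(e.getKey()).equals(e.getValue())) {
                throw new IllegalStateException("Commit conflict on " + e.getKey());
            }
        }
        // Step 2: move the transactional revisions into the session
        // (this sketch only bumps version numbers, not the revision data)
        for (String id : baseVersions.keySet()) {
            sharedVersions.merge(id, 1, Integer::sum);
        }
        // Step 3: notifying the other open transactions is omitted here
    }
}
```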

That's it! Too simple?

It probably isn't that simple in many ways. But there is already a mature Eclipse technology that takes care of all of the aforementioned aspects, and more.

Surprise, surprise, it's the CDO Model Repository, a highly efficient and scalable runtime platform for your models. The following code snippet illustrates how to use CDO to let 100 threads modify the same model:
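The original CDO snippet is not preserved in this copy of the post. As a stand-in, and emphatically not actual CDO API, here is a plain-Java sketch of the same pattern: 100 threads each read a snapshot, apply a local change, and "commit" optimistically, retrying when another thread committed first (the analog of a failed commit on a CDOTransaction):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java stand-in for the lost CDO example: 100 threads modify the same
// shared state optimistically. compareAndSet plays the role of a commit that
// fails when another thread has committed in the meantime.
public class OptimisticDemo {
    static int runDemo(int threadCount) throws InterruptedException {
        AtomicInteger sharedState = new AtomicInteger(0);
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                while (true) {
                    int base = sharedState.get();    // read a consistent snapshot
                    int changed = base + 1;          // apply a local modification
                    if (sharedState.compareAndSet(base, changed)) {
                        break;                       // "commit" succeeded
                    }
                    // "commit" failed: another thread won the race; retry
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return sharedState.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(100)); // prints 100
    }
}
```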

You may have noticed that in the example code above, the commit operation of a background thread can fail because the company object has just been modified by a different thread. With CDO you can easily implement a pessimistic locking strategy by acquiring a single explicit write lock on the company object. Alternatively, you can register shipped or custom conflict resolvers with your transactions if you prefer to stay optimistic as long as possible.
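The pessimistic alternative can be illustrated with a plain ReentrantReadWriteLock as a hypothetical stand-in for CDO's explicit object locks; the class and method names below are invented for the sketch:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical stand-in for an explicit per-object write lock: a thread that
// holds the write lock on the company is guaranteed that no other thread
// changes it concurrently, so its own commit cannot fail with a conflict.
class LockedCompany {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private String name;

    void rename(String newName) {
        lock.writeLock().lock(); // pessimistic: block other writers up front
        try {
            name = newName;
        } finally {
            lock.writeLock().unlock();
        }
    }

    String getName() {
        lock.readLock().lock(); // many readers may proceed concurrently
        try {
            return name;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

The trade-off is the usual one: no failed commits, but writers serialize and badly ordered lock acquisition can deadlock.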

Happy multithreading!


  1. Great post Eike!
    Probably the best and clearest description of the CDO foundations!
    RCP Vision

  2. Thank you Vincenzo. You can't imagine how unusual it felt to only mention CDO in the last paragraph :P

  3. So basically you smartly solve at the library level what should be solved at the language level.

    Object orientation and concurrency are just like salt and sugar: they do not go well together.

    One may want to get out of this giant graph of interconnected objects that we find in every OO software system. The state of each object in the graph can be changed at any time, thus making concurrency and reasoning a nightmare.

    Did you look at how Clojure solves this problem? Firstly by removing such giant graphs of interconnected objects through a functional approach, secondly by dissociating *state* and *identity*, and thirdly by providing a stateful approach, when (rarely) needed, with STM.

  4. Awesome article Eike! It's really a major benefit to EMF to have people thinking so deeply about these issues. Ed and I had sort of a meandering ;) thread on this -- with me meandering the most -- on related topics a while back:

    I wanted to mention that the mutex approach works surprisingly well for single model desktop applications. You really just need to override access to the command stack. The tricky part is handling view notification but even that is pretty manageable.

  5. Of course the issue of reading the state of a large model and then modifying it based on the gathered information is always an issue too. E.g.,

    int i = list.indexOf(o); list.remove(i);

    isn't thread safe even if the list is.

    Kototama, salt and sugar can be a yummy combination. And useful, too: :P

    In fact the core designs of CDO stem from times when I was engaged with orthogonally persistent object systems. Please note that with CDO the giant object graph is just a thin but powerful illusion on top of a heap of unconnected revisions:

  7. Thank you Miles! And sure, the mutex approach can work well, but IMHO only for small problems and for those that are not highly parallelized.

  8. Good point, Ed! With CDO you basically have two options to achieve consistent reads:

    * Either you decouple your thread from the passive updates that originate from other threads and manually refresh() at times of your choice

    * Or you first acquire a set of read locks to prevent changes to your objects from happening in other threads at all.

    I plan to write a couple of other blogs in the near future to shed more light on these orthogonal aspects of EMF and CDO.

  9. Hello Eike,

    first of all, thank you for the fantastic work you've done!

    I have a question: I'm still new to this, but I wonder if there is a way (maybe in newer versions?) to use an optimistic lock but still be able to invalidate a transaction when some value that was read during the transaction has been changed in the meantime.
    I want to avoid making changes to the current algorithms by putting read locks everywhere when moving to CDO (at worst, if we could lock the entire resource for reading, that would at least allow concurrency for read-only transactions).

    Many thanks.

  10. Hi silmarx,

    All CDOViews and CDOTransactions are invalidated by default when someone commits another transaction. You can listen to CDOViewInvalidationEvents or CDOSessionInvalidationEvents when CDOSession.options().isPassiveUpdateEnabled() is true (the default). I could also imagine providing a hook for a single read-access handler in a special thread-local variable so that you can become aware of all objects that an algorithm consumes. With that new hook you could, for example, automatically acquire read locks on objects that your result depends upon. Please submit a bugzilla if you think that would be helpful.


    1. Hi Eike,

      I understand your suggestion about storing all objects that were accessed during the transaction very well.
      But regarding the invalidations: in the tests I did, the transaction is not invalidated. Or at least the commit goes through normally.
      For example:

      Thread 1:
      Book book = library.getBooks().get(0);
      String name = book.getAuthor().getName();
      * pause *

      Thread 2:
      library.getWriters().get(0).setName("new name"); // I only have 1 writer, so it's the same in both threads
      * commit *

      Thread 1 (continues):
      Writer writer = LibraryFactory.eINSTANCE.createWriter();
      writer.setName("copy of " + name);
      * commit *

      Thread 1, which is the second thread to finish, still commits successfully. The only way to invalidate it is to take an explicit lock on the book.getAuthor() object and have passive updates disabled (to make sure the value has not already been reloaded by then).
      The CDOTransaction would only be invalidated if both threads made changes to the same objects, right?

      I will do some more analysis and then submit a Bugzilla entry for your suggestion of locally storing the objects that were read, if we find it necessary.

      Many thanks.