Category Archives: Background

FHIR up your legacy data


At the FHIR DevDays we released Vonk FHIR Facade. A set of NuGet packages to FHIR-enable existing (legacy) systems. Read on or dive in.

What can I do with it?


Ever tried a different kind of bike? A recumbent, a canal bike or even a flying bicycle? From the cyclist’s perspective they’re all the same. That’s because they all have the same interface (pedals, steering wheel, something to sit on). But some of them need a little adjustment to their ‘surface’. That is where riding water or air is different from riding a road, and no single universal bike could cover all of them. Likewise, Vonk FHIR Facade empowers you to provide the same – FHIR – interface and just bridge the gap to different backend systems.

So whether you have that homegrown Access® system with valuable research data, that tailored diabetes registration, or this cool app platform that defines its own web services – Vonk FHIR Facade is meant to put a uniform API in front of each of them. And yes, that uniform API is obviously FHIR.

What is it?

The turn-key Vonk FHIR Server is built from several libraries. The FHIR RESTful API logic is built into Vonk.Core. It is agnostic to the actual storage. Instead it only communicates with an abstraction of the storage. We have already built three implementations of that abstraction ourselves. That way we are able to support very different types of storage, currently SQL Server, MongoDB and Memory.

Vonk FHIR Facade offers you the Vonk.Core library for all the FHIR functionality, and allows you to provide an implementation of the storage abstraction that fits your existing repository. That can be a database (relational or otherwise), other web services or even flat files, but that might hurt performance 🙂

Do I have to program?

Yes. Building a Facade with Vonk means building an ASP.NET (Core) web application that uses the provided NuGet packages.

The .NET programming languages give you a lot of freedom to express whatever you need to access the backend system. This power is hard to express in a declarative approach. On top of that, it allows for optimal performance.

That being said, we strive to develop methods to define a Facade for users that are less familiar with (.NET) programming. In order to get that right, we will first collect best practices from the early adopters and look for emerging patterns.

Side wheels for relational databases


We expect access to relational databases to be the most common scenario – we have already implemented it three times ourselves. So we provided a base implementation for doing exactly that, built on .NET Entity Framework Core: Vonk.Facade.Relational.

How does it work?

Here we have to be a bit technical – it’s about programming right? Implementing the storage abstraction involves two main components:

  1. Mapping data from the entities in your backend / database to FHIR Resources
    (And vice-versa if you accept creates or updates.)
  2. Translating pieces of the FHIR search mechanism to queries / web calls in your backend system.

Mapping data

This is pretty straightforward. Given that you have one or more related entities in your source system, you create a new instance of the target resourcetype and fill in the blanks:

public IResource MapPatient(ViSiPatient source)
{
  var patient = new Patient();
  patient.Id = source.Id.ToString();
  patient.Identifier.Add(new Identifier("", source.PatientNumber));
  patient.Name.Add(new HumanName().WithGiven(source.FirstName).AndFamily(source.FamilyName));
  patient.BirthDate = new FhirDateTime(source.DateOfBirth).ToString();
  patient.Telecom.Add(new ContactPoint(ContactPoint.ContactPointSystem.Email, ContactPoint.ContactPointUse.Home, source.EmailAddress));
  return patient.AsIResource();
}

You can also put data in extensions if a custom profile requires you to do so.

Before you program the mapping, you will have to design it first. Unless data is required on one side that is absent on the other, this will not take too much time.

Building queries

Searching in FHIR can become pretty complex. Straightforwardly searching for a string may already involve inspecting multiple columns in your database (e.g. the names of a patient), and then there are several other types of searchparameters like token, quantity or reference and modifiers on top of that.

What Vonk does for you is analyze the search url (or body – POST is also allowed for search) and break it into pieces. It will validate all the parts of it:

  • Is the searchparameter defined on the requested resourcetype (also within a chain)?
  • Is the modifier valid for this type of parameter?
  • Is the argument in a valid format?
  • Can the quantity be canonicalized?

After that, it will present you with a parameter name and a value containing the details appropriate to the type of search parameter. This gives you all the information to essentially build one clause of the where statement.

public override PatientQuery AddValueFilter(string parameterName, ReferenceFromValue value)
{
  if (parameterName == "subject" && value.Source == "Observation")
  {
    var obsQuery = value.CreateQuery(new BPQueryFactory(OnContext));
    var obsIds = obsQuery.Execute(OnContext).Select(bp => bp.PatientId);

    return PredicateQuery(p => obsIds.Contains(p.Id));
  }
  return base.AddValueFilter(parameterName, value);
}

The example above contains a non-trivial type of search: it is the result of a reverse chain on a reference search parameter: e.g. Patient?_has:Observation:subject.code=abc. Vonk disassembles that search url and makes sure you only need to worry about one thing at a time. If you would also implement the Observation.code parameter, this whole search would work.

In this example, the BPQueryFactory is the factory that creates where clauses on the imaginary BloodPressure table, yielding Observation resources. This is the class in which you would implement the Observation.code parameter (based on a TokenValue).
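To make that concrete, here is a sketch of how the Observation.code parameter could be handled in that factory. Note the assumptions: the BloodPressure entity and its LoincCode column are imaginary, and the exact shape of TokenValue (a Code property holding the coded value) is an assumption mirroring the example above, not a guaranteed API.

```csharp
// Sketch only: handle the Observation.code token parameter in the BPQueryFactory.
public override BPQuery AddValueFilter(string parameterName, TokenValue value)
{
  if (parameterName == "code")
  {
    // Compare the token's code to the (imaginary) column holding the coded value.
    return PredicateQuery(bp => bp.LoincCode == value.Code);
  }
  return base.AddValueFilter(parameterName, value);
}
```

With something like this in place, both a direct Observation?code=... search and the reverse chain from the Patient example above resolve through the same clause.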

A bonus is that if you implement the parameter as done above, the include functionality comes for free: Patient?_revinclude=Observation:subject.

This implementation utilizes an EFCore DbContext and LINQ to express the filter. But the repository interfaces also allow you to express in terms of other query mechanisms, e.g. an http call.

Providing a FHIR response

This is where Vonk takes over again. You just return a list of resources based on the data returned from the composed query and the mapping. Vonk will take care of bundling, paging links and serialization to the requested format.

If you have not implemented one of the used searchparameters, Vonk will report that in an OperationOutcome. It will also automatically handle other headers for you such as the Prefer header on create and update.
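As an illustration of the latter: with the Prefer header from the FHIR RESTful specification, a client states how much of the resource it wants returned after a create or update. A minimal request could look like this:

```http
POST /Patient HTTP/1.1
Content-Type: application/fhir+json
Prefer: return=minimal
```

The specification defines return=minimal, return=representation and return=OperationOutcome as the possible values.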


This release of Vonk FHIR Facade supports FHIR STU3. Expect support for R4 shortly after that has been released.

Get on your bike!

We prepared an elaborate example for reading and searching Patient and Observation resources on a fictitious legacy system. You can find it on the Vonk documentation site. This is accompanied by code (of which the examples above are an excerpt) on GitHub.

The Facade code is meant to be ready for use in production scenarios. Nevertheless, this is the first release, so expect to see some changes to the interfaces in the near future. To smooth your implementation and any changes afterwards, we would like to keep in touch about your development.

Of course we’re very eager to provide support and get feedback from your experiences. Contact us on


Currently the pricing of all editions of Vonk – Server, Facade, Components – is the same. Please read the product page for details on the Launching Customer Program.

Versioning and canonical urls


At last week’s FHIR Developer Days in Amsterdam, we had a highly enjoyable break-out session on the use of canonical urls when taking versioning into consideration.

The issue had been popping up more often recently, and we, as the core team, had been putting off solving it until we had a better understanding of the problem. But here we were: we had a room full of knowledgeable people and enough experience to take a stab at a solution.

For those not present, I’d like to summarize what we discussed, seamlessly turning into what I think needs to happen next.

The problem

Let me start off by giving you an overview of the problem.
The two key players in the game are the canonical url and the canonical reference. The canonical url is a special identity present on most of our conformance resources (StructureDefinition, ValueSet and the like). This identity is used by a canonical reference, present in StructureDefinition (e.g. StructureDefinition.base) and FHIR instances (Resource.meta.profile), which is used to refer to other StructureDefinitions (or any other conformance resource). For example, the StructureDefinition for the localized version of Address in Germany starts like this:

    <url value="" />
    <name value="address-de-basis" />
    <title value="Adresse, deutsches Basisprofil" />

Here, the <url> contains the canonical url of the address-de-basis StructureDefinition.

This canonical url can be referenced by other parts of your specification, as for example here at Patient.address in the StructureDefinition of the German version of Patient:

    <path value="Patient.address" />
    <short value="Adresse nach deutschem Profil" />
        <code value="Address" />
        <profile value="" />

This is quite comparable to declaring a class in your favorite programming language, and then referring to it when you declare an instance variable or class member:

public class GermanAddress { ... }

public class GermanPatient
{
    public GermanAddress Address { get; }
}

As a good member of the FHIR community, you have published all your profiles to a registry (these examples come from the German Basisprofil on Simplifier), and people are happily validating their Patient instances against it. Some of the instances may even have gotten tagged by a profile claim:

            <profile value="" />
        <!-- rest of the Patient's data -->

Before long, this canonical reference gets baked into both instances in databases and software using your profile.

And then the day comes that you are required to make changes to your profile. Breaking changes, in fact. You are circulating your brand new version of the profile amongst your colleagues, everyone is updating their test instances and all works fine. Until you publish the new version on the registry. Once a copy of your StructureDefinition starts trickling down to servers around the country, previously correct instances of the German Patient will start to fail validation. You realize you have broken the contract you had with your users: stuff that was valid before has now, without the end-users doing anything, become broken.

More subtly, if your breaking change was to the address-de-basis profile, it turns out to be a breaking change to all profiles depending on address-de-basis, which then proliferates all the way up to the instances.

A simple solution

It is easy to see what you should have done: you (and all your users) should have versioned both the canonical url and the canonical reference! So,

    <url value="" />


    <code value="Address" />
    <profile value="" />

and finally

    <profile value="" />

This way, when we publish a new version we may change the canonical url and update it to end in v2, and all existing materials will remain untouched and valid.
Of course, minor changes are OK, so if we all agree to stick to semver, and only change our canonical url when a major version number changes, we’re doing fine. We formalized this approach in the STU3 version of the FHIR specification, by adding a <version> element to StructureDefinition:

    <url value="" />
    <version value="0.1" />

and then allowing you to tag canonical references with a | and a version number like so:

    <code value="Address" />
    <profile value="|0.1" />
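Mechanically, a consumer splits such a reference on the | character. A small illustrative sketch in Python (the function name and urls are mine, not from the spec):

```python
def split_canonical(reference: str):
    """Split a canonical reference into (url, version).

    The version is None when the reference carries no |version suffix.
    """
    url, separator, version = reference.partition("|")
    return url, (version if separator else None)

# An unversioned reference: the whole string is the canonical url.
print(split_canonical("http://example.org/StructureDefinition/address"))
# A versioned reference, as allowed since STU3.
print(split_canonical("http://example.org/StructureDefinition/address|0.1"))
```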

Ok. Done. We published STU3 and hoped the problem was solved.

But it wasn’t. Well, technically, it was – but there are a few practical complications:

  • We had not followed this practice ourselves in the specification (leaving untouched for years across STUs), and neither did the most prominent early Implementation Guides (like those from Argonaut). We set bad examples that turned out to be the path of least resistance as well. Guess what happens.
  • It just feels wrong to hard-wire version numbers in references within a set of StructureDefinitions and Implementation Guides you author. If you publish them as a version-managed, coherent set of StructureDefinitions, it is obvious that you’d like to reference the version of the StructureDefinition published at the same time as the referring StructureDefinition in that same set.
  • If you need to bump the version in a canonical url in the set that you publish, you need to be really sure you update all references to it in that set. And then (as we saw above) update the canonical url of all referring StructureDefinitions, and so on and so on. If you fail to do this, you end up in a situation where part of your definitions are using one version and another part another version. Granted, we could find someone who thinks that’s a feature, but I am sure most would disagree.

Better tooling support for authors could ease this job and help them stick to versioned references, but I kept having the nagging feeling something was not right. This was strengthened by the fact that this is not how we commonly version in other development environments: continuing our parallel with programming concepts, versioning the canonical url would be comparable to versioning our class names:

public class GermanAddress2 { ... }
public class GermanName4 { ... }

public class GermanPatient2
{
    public GermanAddress2 Address { get; }
    public GermanName4 Name { get; }
}

It is not like we’ve never seen this before, but that’s really only done if you want to keep incompatible versions of the same class within the same codebase, because you still need access to both (e.g. to do mapping).

At the same time, we were looking at how users of Simplifier and authors of implementation guides organized their projects and how they wanted versioning to work. It turns out that StructureDefinitions simply do not live on their own, much like Java classes are not shared and distributed in isolation. They are authored and shipped in sets (like implementation guides), and are versioned in those sets. Of course, they may still use canonical references to materials outside the set, and you’d need to tightly version those references, but inside their set, they simply mean to point to each other.

You don’t need to look around long at how other parts of the industry have solved this to realize that we need the concept of a “package”, much like you would package up your classes and Javascript files into zips, jars or whatever and ship them as npm, maven or NuGet packages.

Packages to the rescue

If you are not familiar with these packaging mechanisms, let me call out a few properties of packages, to see how they solve our problems:

  • Packages contain artifacts that are managed by a small group of authors, who prepare them as a consistent set and publish them as an indivisible unit under the same version moniker.
  • A package is published on a registry and has a name and a version, the combination of which is enforced to be unique within that registry. You cannot overwrite a package once it has become published.
  • Packages explicitly state on which version of which other packages they depend, and contain configuration information on how to handle version upgrades and mismatches. Additionally, the user of a package may override how version dependencies are resolved, even enforcing the use of an older or newer version of a dependency when desired.

Packages are usually just normal zip archives with all the artifacts you would like to publish, with an additional “configuration file” added to the zip:

  {
    "name": "",
    "version": "1.0.4",
    "dependencies": {
      "": ">3.1.4"
    }
  }

This has considerable advantages for the users:

  • When a user downloads a package, it contains “everything you need” to start working with StructureDefinitions within it, instead of having to download individual artifacts one by one from a registry.
  • The user also has the confidence that these artifacts are meant to be used together, which is not obvious if you are looking at a huge list of StructureDefinitions on the registry.
  • A package manager will ensure that if you download one package for use in your authoring environment or machine, all dependencies will also be retrieved, and you would not encounter errors due to missing references.

As an author of a package of conformance resources this means that you no longer need to version your canonical references: they are interpreted to mean to refer to the resources inside your package. This is even true for canonical references to StructureDefinitions and ValueSets outside your package, since you as an author explicitly declare these dependencies and version them at the package level and no longer at each individual reference. Upgrading to a new version of a dependency is now completely trivial.

It might even solve another long standing wish expressed by authors: you could configure “type aliases” within your package, stating that every reference to a certain core type (or profile) within the package is translated to a reference to another type. If you wonder why that is useful, consider the poor German authors who defined a German Address (address-de-basis) and then needed to go over all of their StructureDefinitions to replace references to the “standard” core Address with their localized version. It’s pretty comparable to redirecting conformance references to a specific version of a StructureDefinition, so we can solve this in one go.

Concrete steps

Looking at the situation today, I suggest we do the following:

  • Remove <version> from StructureDefinition, and obsolete the use of | in references (though hard version references might still have a use).
  • Decide what we need as a syntax to define packages and declare dependencies. We could leverage existing mechanisms (package.json being a prime candidate) or integrate it into the existing ImplementationGuide resource (which would enable existing registries to simply use a FHIR endpoint to list the packages).
  • Enhance the authoring workflow in the current tools to allow the authors to create packages when they want to publish their IGs, fixing the external dependencies to specific versions.
  • Create (or re-use) package managers to enable our users to download packages and their dependencies (and make our registry compatible with those tools).

Are we done? No. There are two more design decisions (at least) to make. Let’s start with the easiest one:

It is apparent that there is a relationship between our concept of an “Implementation Guide” and a package, and we have to figure out exactly what that relationship is. I feel an Implementation Guide is a kind of package, containing the conformance resources, html files, images, etcetera that form the IG. But this also means there will be packages that are not Implementation Guides. If we decide that the ImplementationGuide resource is the home of the “configuration part” of a package, we will need to rename ImplementationGuide to reflect its new scope.

But I saved the most impactful consequence for last:

You can no longer interpret canonical references present in conformance resources or instances outside of the context of a package.

Let me reiterate that: any authoring tool, validator, renderer or whatever system that uses StructureDefinitions to do its work will need to know this context. Those who have carefully studied the current ImplementationGuide resource realize this is already the case now, but most are blissfully unaware of this hidden feature.

For systems working with conformance resources (like instance validators), it is likely they have this context: if you are dealing with a StructureDefinition, you probably got it from a package in the first place (it becomes a different matter entirely if a resource can be distributed in different packages – well, let’s not digress).

For servers exchanging instances, however, we would need to assume they know the context out of band. But this won’t do for partners exchanging outside of such a controlled environment. For this, let me suggest summoning an old friend, hidden in a dark corner of the FHIR specification: Resource.implicitRules

The specification states:

Resource.implicitRules is a reference to a set of rules that were followed when the resource was constructed, and which must be understood when processing the content

That sounds about right. However, it continues:

Wherever possible, implementers and/or specification writers should avoid using this element.

And it sparingly lists the reasons why. I suggest we take a fresh look at this element and see whether the element we thought up so many years ago may find a use after all.


We’re not done yet, and I admit I am not sure I’ve dealt with all the consequences of this versioning solution here, but it has one thing going for it: we’re profiting from solutions thought up by the rest of the industry, who have dealt with this before. But are an exchange standard and its artifacts completely comparable to a software package? Will we be able to fix the problem of communicating package context?

I don’t know yet, but the more I think about the uses of packages and how it eases the life of authors and users of implementation guides, the more I think we should pursue this a bit in online discussions. Hope they are as engaging as the one at FHIR DevDays!

Tune in to our new Profiling Academy and become a profiling expert yourself!

By Lilian Minne – At the DevDays 2017, we launched our new product: the Profiling Academy. The Profiling Academy is available to all Simplifier users (free to join) and is meant for anyone who wants to learn more about FHIR profiling. We aim to share our knowledge in a way that is easy to digest for all levels of FHIR users: beginner, intermediate and advanced.

The Profiling Academy offers short, digestible modules covering one topic each. Each module offers reading material, real-life examples and exercises.  At this moment the following modules are available: Start Profiling, Extensions, Slicing and Best-Practices. More modules will be added in the near future. If you want to follow a module, just click on its name or use the menu in the upper navigation bar.

Profiling academy

Don’t forget to visit the other pages as well:

  • Feature movies: Watch interesting movies explaining some of the features of our products.
  • Helpful links: This page contains useful links when working with FHIR.
  • Meet our team: Get to know our Profiling Team members who are happy to introduce themselves.
  • About FHI: Learn more about our company and how to get in touch.

We are curious to know how you feel about the Profiling Academy. As the Profiling Academy was built using the IG-editor in Simplifier (pretty cool, right?), please leave your comments in the Issue Tracker of the project.

Make your first FHIR client in R – within one hour!


R on FHIR is the latest addition to our FHIR tool suite. It is available on The Comprehensive R Archive Network (CRAN). R on FHIR is, as you can tell from the name, an R package that supports R users with fetching data from FHIR servers. It provides simple, but powerful tools to perform read, version read and search interactions on FHIR servers and fetch the resulting resources in an R friendly format.

R on FHIR consists of two classes called fhirClient and searchParams, just like our .NET API. The fhirClient provides functions to perform read and search operations and to use the FHIR paging mechanism to navigate around a series of paged result Bundles. The searchParams class provides a set of fluent calls to allow you to easily construct more complex queries. The searchParams class falls outside the scope of this blog, but you can read the documentation on R on FHIR  to learn more about how to use this class. The documentation also includes some examples for both classes.
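Before we begin, a minimal taste of the fhirClient: a sketch of a read interaction, assuming a server endpoint (left blank here, as in the examples below) and a placeholder resource id.

```r
# Load the package and create a client for a FHIR STU3 endpoint
# (fill in your server's base url).
library(RonFHIR)
client <- fhirClient$new("")

# Perform a read interaction on a relative location; the id is a placeholder.
patient <- client$read("Patient/example")
```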

Step 0 – Install R (and RStudio)

Before we start, we need to make sure we have R, and preferably RStudio, installed on our machine. See the links below. For this blog I used R version 3.2 (higher versions will also work, and probably lower too, but I did not test that) and RStudio version 1.0.153.

Step 1 – Install and load R on FHIR

FHIR up (first and last pun, I promise) your RGui or RStudio and execute the following statement in your console:

# This will install R on FHIR from The Comprehensive R Archive Network.
> install.packages("RonFHIR")

Step 2 – Create a new R script

In this blog we are going to write a function that returns an overview of a population based on a postal code. Go to File > New File > R Script (or hit Ctrl+Shift+N) in RStudio, or File > New script in RGui. Now that we have an empty script, we can start writing our function, called ‘getAreaInfo’, with the postal code as our parameter. On the first line of our function we create an instance of a fhirClient. Your script should look like this:

# Load the R on FHIR package from the library
# where R stores its packages.
library(RonFHIR)

getAreaInfo <- function(postalCode){
  client <- fhirClient$new("")
}

Step 3 – Searching Patients on a FHIR server

Now that we have initialized a client, we can start performing a search operation on a server. The most basic search is the fhirClient’s $search. It searches all resources of a specific type based on zero or more criteria. The $search function will return a Bundle as a list containing the found ResourceEntries as a data.frame. Normally, any FHIR server will limit the number of search results returned. To obtain all results we can use $continue, which uses the FHIR paging mechanism to navigate through a series of paged result Bundles. In the script below we use the $search and $continue functions to find all Patients living within the given postal code and return how the genders are distributed.


getAreaInfo <- function(postalCode){
  client <- fhirClient$new("")
  postalQuery <- paste("address-postalcode=", postalCode, sep = "")
  bundle <- client$search("Patient", c(postalQuery, "address-use=home"))
  genders <- c()
  # Keep collecting pages; $continue returns NULL when there is no next page.
  while(!is.null(bundle)){
    genders <- c(genders, bundle$entry$resource$gender)
    bundle <- client$continue(bundle)
  }
  return(table(genders))
}

Now that our function is complete we can call it from our console, but first we have to tell R to source our code. This can be done with Code > Source or Ctrl+Shift+S in RStudio and File > Source R code… in RGUI. Now we can run our ‘getAreaInfo’ function in our R console.

> getAreaInfo(3999)
female   male 
     1    574

Congratulations! You just created your first function using a FHIR client in R to track the gender distribution in a given postal area.


This blog only showed a quick example of how you can use R on FHIR. For more details and examples you can always consult the documentation. Feel free to give feedback and join the R on FHIR development at our GitHub page.

We are planning to do a birds of a feather session during the HL7 FHIR DevDays here in Amsterdam. Hope to see you there.

Vonk 0.3: subscriptions and more


Recently we released an upgrade of the enterprise FHIR Server named Vonk. This release 0.3 contains features like custom search parameters, subscriptions and an Administration API. For a complete overview of new features, see the release notes. You can try for yourself at or download a trial version on

Remember my bike?

It is strong, durable and simple. But if you are going to take a long ride, you need stuff. Hot coffee, a raincoat and of course binoculars to check all the birds. That means hanging a back roller on it. And that is what we did with Vonk as well: add a back roller full of new features.

Protection from the elements

What a raincoat is to a cyclist, validation can be to Vonk. Bare validation was already possible (and still is, on /&lt;resourcetype&gt;/$validate), but now you can also protect your data by validating all resources that are created or updated. If they fail validation, they are rejected.

That is nice, but you probably have made your own custom profiles (in the form of StructureDefinitions). Of course hosted on 🙂 Previously you had to send these to /StructureDefinition/yourSD to make Vonk use them for validation. You can now feed them to Vonk through the new Administration API, still by doing a FHIR update interaction.

Finally, you can specify a list of profiles in the settings so that Vonk will only allow resources conforming to any of those profiles.


Subscriptions

Although a subscription to fresh coffee while riding is not really feasible, you can now subscribe to resource changes in Vonk. How this works is described in the FHIR specification, but the gist of it is that you post a Subscription resource to the Vonk Administration API. The resource specifies criteria for which resources you would like to receive notifications, and a channel through which you want to get those notifications.

Current limitations are that you can only specify a channel of type rest-hook, and changes on referenced resources do not ‘bubble up’. And although subscriptions are evaluated asynchronously, we did not yet build it for large numbers of subscriptions. See the documentation for details.
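To make the rest-hook channel concrete, a minimal STU3 Subscription might look like this sketch (the criteria and endpoint are made-up examples):

```json
{
  "resourceType": "Subscription",
  "status": "requested",
  "reason": "Monitor all blood pressure observations",
  "criteria": "Observation?code=http://loinc.org|85354-9",
  "channel": {
    "type": "rest-hook",
    "endpoint": "https://example.org/fhir/notify",
    "payload": "application/fhir+json"
  }
}
```

Posting this to the Administration API asks the server to notify the given endpoint whenever a resource matching the criteria is created or updated.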

Find more

If you search for birds, you need binoculars (or better: a telescope). If you look for resources, you need as many features as possible. So we introduced custom search parameters. You can specify several sources (the .NET API, a zip file, a directory) with custom search parameters at startup. After that, new or updated resources will be indexed for these new parameters, and you can use them in your search interactions. Existing resources can be re-indexed through the Administration API, in order to find them on the new parameters as well.

Besides that, we implemented support for _list, _has, _elements, _type and _revinclude. And we improved on :missing, :exact and datetime handling. Finally, we moved to using the FHIRPath expression in the SearchParameter resource to extract the indexed data, meaning we can now handle all the search parameters in the spec (and probably all your custom ones as well). A full list of improvements is provided in the release notes.
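As a sketch of what such a custom parameter could look like, here is a SearchParameter resource along these lines; the url, code and expression below are invented for illustration, not taken from Vonk:

```json
{
  "resourceType": "SearchParameter",
  "url": "http://example.org/SearchParameter/patient-nickname",
  "name": "nickname",
  "status": "active",
  "description": "Search patients on a (hypothetical) nickname extension",
  "code": "nickname",
  "base": ["Patient"],
  "type": "string",
  "expression": "Patient.extension.where(url = 'http://example.org/fhir/StructureDefinition/nickname').value"
}
```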

Go faster

I made my bike slightly faster by making it lighter: I removed the fixed dynamo and headlamp, as I use a much better LED lamp when I have to. We also made Vonk faster, but by adding things: indexes! Both the SQL and MongoDb database implementations could use a few extra indexes, since reading and searching are used a lot more than update and create.

Have your partner make lunch

Do you know the happy feeling when you find a nice lunch in your bag, readymade by your partner? You can now pre-load Vonk with a set of resources as well, and make your users just as happy. This is particularly useful if Vonk is used as a reference server for testing between communication partners. You use it by posting a zip file with resources through the Administration API (/administration/preload). Note that this interface is not meant for bulk loading really large sets of resources.

Start your ride easier

We have had a lot of positive feedback on the ease of deployment of Vonk. But we felt it could still be easier. So we removed the step of creating your SQL database yourself. If you allow Vonk to do so, it will automatically create databases for both the regular operations and the Administration API. For MongoDb this already happened, and for the Memory implementation this is obviously not an issue.

Beware if you have a database for the previous version of Vonk (0.2.*). It cannot be automatically updated. We will contact customers with professional support to upgrade their databases. You can check your version of Vonk in the CapabilityStatement.


You need a license to run Vonk. You can download an evaluation license, which grants you usage of Vonk for 30 days and requires you to restart Vonk every 12 hours. If you need more evaluation time, you can simply download a renewed evaluation license. If you need to test without the 12-hour limit, please contact us.


If you’re interested in using Vonk for production purposes, visit our pricing pages. We are still working towards a usage based pricing model and in the meantime you can enroll in our Launching Customer Program. Early customers are rewarded with the choice to stay in this pricing model once we have implemented the usage based pricing model.

What’s new in Simplifier 16.5?

By Lilian Minne

The latest Simplifier release includes a couple of pretty nice improvements to implementation guides, workflows and the GitHub Webhook. We also added the ability to edit resources directly in XML from Simplifier. In this article we will tell you all about the latest release.

Implementation guides

Let’s start with the implementation guides. While Simplifier was originally designed for uploading and downloading publications, the idea of working with projects and organizations followed later. IGs were treated as separate parts that were not linked to a specific project. In this new release, IGs are always part of a project. Each project now has a new tab where you can find the project’s IGs, as shown in the screenshot below. The old IGs will still be available; however, new IGs can no longer be created outside a project.

ImplementationGuide in Project
Implementation Guides of a project.

Moreover, we introduced a new storage system for IGs. Your IG is now stored as separate Markdown files in your project. This has a lot of advantages. It is now possible to access them as separate resources, add issues to them and check their version history (more on this later). In addition, your IGs will now be available in your GitHub repository and can be downloaded in a ZIP file together with the other files of your project.


IG tree example

IG tree in the IG editor.

To illustrate how this works, see the screenshot of an example IG containing two chapters called ‘First part’ and ‘Second part’. The first part also contains a child called ‘Child of first part’.

The different parts of the IG are now accessible from the Resources tab in your project as well as from the search engine. Two new categories, Text and Image, have been added. To search for IG parts, just check the Texts box.

New Test resources

Resources inside a project.

The IG parts can be accessed in the same way as a resource. From the Issues tab you can create new issues that specifically address an IG part. From the History tab you can access its version history. In this way you can go back to an earlier version and edit it directly from there. It is even possible to compare two different versions by selecting the checkboxes of the versions you want to compare.

Version history

Version history of a resource.

Custom workflow

In addition to the improvements to IGs, we enhanced the custom workflow. With a custom workflow you can add additional statuses besides the standard FHIR statuses. For example, if you want to be able to explicitly state that your resource is ready for review, you could add a Review status. In the new release, it is possible to click on the status of a resource to see all possible statuses and their explanations, as shown in the screenshots below.

Resource tab - custom workflow

Resource tab inside a project.

Custom workflow overview

Custom workflow overview.

GitHub Webhook

The last big improvement to an existing feature concerns the GitHub Webhook, which is now one of the most used additional features of Simplifier. Over time, some users have reported issues with complex merges. To address these reports, we have refactored the whole GitHub Webhook functionality in Simplifier, which is pretty complex stuff. We now use a Git engine to calculate the differences, which results in much more reliable updates of projects that contain a lot of branches and complex merges. These changes significantly improved the reliability and predictability of the GitHub Webhook.

Edit resources and copy code to clipboard

My personal favorite is the new feature that enables you to edit your resources directly within Simplifier by editing the XML code. Just click Update and choose the Edit option to open a “simplified” XML editor. Pretty cool, right?

Edit resources by copy&paste

Update a resource.

Edit xml

Edit the XML of a resource directly in Simplifier.

From the download button it is also possible to copy the complete XML or JSON code of a resource to your clipboard.

Work in progress

Work in progress, user profile

Personal menu.

We are working on something we like to call Facebookification. From your personal menu you can now access your User Profile. This profile is public, so other users can access it as well. In the future, we plan to add more functionality for personalizing your profile. Also new from this menu is that you can access a separate Invites page which shows your pending invites for Simplifier projects.

The way forward

To give you a taste of the cool stuff that we are planning for the future (no guarantees of course), here are some of our ideas:

  • Import and export of IG resources
  • Add intelligence in the IG editor (e.g. placeholders)
  • Comparing the version history of IG pages
  • Internal references to images in the same project (note that external references are already possible)

Of course we are always open to new ideas as well, so don’t hesitate to use the feedback link on Simplifier.

Element Identifiers in FHIR

In this article, we’ll take a closer look at element identifiers in FHIR, the relevant changes introduced by STU3 and the reasons that motivated that change.

FHIR has supported element identifiers since DSTU1. They are intended to specify unique identifiers for the elements within an individual resource. The official definition from the STU3 specification states: “The id property of the element is defined to allow implementers to build implementation functionality that makes use of internal references inside the resource. This specification does not use the internal id on the element in any way.”

Element ids are particularly convenient for identifying the individual ElementDefinitions inside a StructureDefinition. In a basic StructureDefinition without slicing, each element definition has a unique path. However, when a profile introduces slicing constraints, element paths are no longer unique, as the following example demonstrates:

Element name   Slice name   Element path
Patient                     Patient
identifier                  Patient.identifier
identifier     ssn          Patient.identifier
system                      Patient.identifier.system
identifier     ehr          Patient.identifier
system                      Patient.identifier.system

Clearly, the element path itself is not sufficient to uniquely identify individual element definitions; we also need the slice name(s). But if we also include the slice name(s) in the expression, the resulting value is no longer ambiguous and becomes a unique identifier:

Element name   Slice name   Element identifier
Patient                     Patient
identifier                  Patient.identifier
identifier     ssn          Patient.identifier:ssn
system                      Patient.identifier:ssn.system
identifier     ehr          Patient.identifier:ehr
system                      Patient.identifier:ehr.system

In order to support this, FHIR STU3 introduces some changes in the definition of element ids. The following table compares the specification of element ids in different FHIR versions:

FHIR version   Data type   Max length
DSTU 1         id          36 characters
DSTU 2         id          64 characters
STU 3          string      1 MB

In STU3, the element identifier datatype has changed from id to string, effectively removing the maximum length constraint. Also, STU3 allows any string value that does not contain spaces, whereas in earlier versions the set of valid characters was limited to A-Z | a-z | 0-9 | - | .

FHIR does not specify a mandatory format for element identifiers. In STU3, any unique non-empty string value without spaces is considered a valid identifier. However, STU3 does introduce a preferred format for the identifiers of ElementDefinitions in a StructureDefinition resource:


Similar to the element path, the preferred identifier format specifies a number of path segments, separated by a dot “.” character. Each segment represents an individual element in the hierarchy and starts with the element name, optionally followed by a colon “:” character and the associated slice name (if not empty).
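The construction, and its inverse, can be sketched in a few lines of Python. This sketch assumes slice names contain no dots, which the preferred format presupposes for it to be parseable:

```python
def element_id(segments):
    """Build a preferred-format element id from (name, slice_name) segments."""
    return ".".join(f"{name}:{s}" if s else name for name, s in segments)

def parse_element_id(eid):
    """Split a preferred-format element id back into (name, slice_name) pairs."""
    pairs = []
    for part in eid.split("."):
        name, _, slice_name = part.partition(":")
        pairs.append((name, slice_name or None))
    return pairs

# The sliced identifier elements from the example table above:
ssn_system = [("Patient", None), ("identifier", "ssn"), ("system", None)]
assert element_id(ssn_system) == "Patient.identifier:ssn.system"
assert parse_element_id("Patient.identifier:ehr.system") == \
    [("Patient", None), ("identifier", "ehr"), ("system", None)]
```

Note how the two sliced `system` elements, which share the path Patient.identifier.system, get distinct identifiers once the slice names are included.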

The preferred element identifier value only depends on the position of the element in the element tree. Different representations of a specific ElementDefinition in the differential and the snapshot component share the exact same identifier value. As the astute reader may have already noticed, this implies that element identifiers in the preferred format are actually not fully unique within the context of the containing StructureDefinition resource. Also, if we include multiple StructureDefinitions in a Bundle resource, then ElementDefinition identifiers are not guaranteed to be unique within the context of the containing Bundle resource.

Note: When a FHIR resource is serialized to the XML representation, FHIR element identifiers are expressed as xml:id attributes. According to the W3C specification, “the [identifier] value is unique within the XML document”. So in fact, the FHIR specification violates the W3C XML specification… However in practical situations, this idiosyncrasy of FHIR shouldn’t pose an issue.

In general, software cannot assume that FHIR element identifiers follow the preferred format. The FHIR specification itself does not use the internal id on the element in any way (1). For ElementDefinitions contained in a StructureDefinition resource, the element name and the slice name remain the leading identifying attributes for processing logic to act on. This also implies that a sparse differential component should always include parent elements with a non-empty slice name, even if they are unconstrained. In theory, processing logic could reconstruct the parent element hierarchy by parsing the element identifiers in the differential component, provided that all identifiers are specified in the preferred format. However, as the preferred identifier format is not required, generic logic cannot rely on this information.

Nonetheless, the standard open source Java and .NET libraries for FHIR STU3 both provide implementations of a snapshot generator that can generate element ids in the preferred format. So within a clearly defined use context that guarantees standardized element identifiers to be present, e.g. because all snapshots are always (re-)generated by a standard FHIR library, it becomes possible to implement processing logic that acts on the standard identifiers.

Forge, the FHIR profile editor, introduced preliminary support for element identifiers as part of the initial STU3 release from May 2017. Initially, Forge allowed users to specify custom identifiers, but this feature has since been deprecated. As of release 16.4 for STU3, Forge automatically generates element identifiers in the preferred format on all ElementDefinitions in both the differential and snapshot components. Users cannot manually edit the generated identifiers.

1. ^ As Grahame points out in his comment, the FHIR standard actually does use element ids as the target of content references. However, these references do not rely on the format of the identifier.