Versioning and canonical urls


At last week’s FHIR Developer Days in Amsterdam, we had a highly enjoyable break-out session on the use of canonical urls when taking versioning into consideration.

The issue had been popping up more often recently, and we, as the core team, had been pushing off trying to solve it until we had better understanding of the problem. But here we were: we had a room full of knowledgeable people and enough experience to take a stab at a solution.

For those not present, I’d like to summarize what we discussed, seamlessly turning into what I think needs to happen next.

The problem

Let me start off by giving you an overview of the problem.
The two key players in the game are the canonical url and the canonical reference. The canonical url is a special identity present on most of our conformance resources (StructureDefinition, ValueSet and the like). This identity is used by a canonical reference, present in StructureDefinition (e.g. StructureDefinition.base) and FHIR instances (Resource.meta.profile), which is used to refer to other StructureDefinitions (or any other conformance resource). For example, the StructureDefinition for the localized version of Address in Germany starts like this:

    <url value="" />
    <name value="address-de-basis" />
    <title value="Adresse, deutsches Basisprofil" />

Here, the <url> contains the canonical url of the address-de-basis StructureDefinition.

This canonical url can be referenced by other parts of your specification, as for example here at Patient.address in the StructureDefinition of the German version of Patient:

    <path value="Patient.address" />
    <short value="Adresse nach deutschem Profil" />
    <type>
        <code value="Address" />
        <profile value="" />
    </type>

This is quite comparable to what happens in your favorite programming language: defining a class and then referring to it when you are declaring an instance variable or class member:

public class GermanAddress { ... }

public class GermanPatient
{
    public GermanAddress Address { get; }
}

As a good member of the FHIR community, you have published all your profiles to a registry (these examples come from the German Basisprofil on Simplifier), and people are happily validating their Patient instances against it. Some of the instances may even have gotten tagged by a profile claim:

    <meta>
        <profile value="" />
    </meta>
    <!-- rest of the Patient's data -->

Before long, this canonical reference gets baked into both instances in databases and software using your profile.

And then the day comes that you are required to make changes to your profile. Breaking changes, in fact. You are circulating your brand new version of the profile amongst your colleagues, everyone is updating their test instances and all works fine. Until you publish the new version on the registry. Once a copy of your StructureDefinition starts trickling down to servers around the country, previously correct instances of the German Patient will start to fail validation. You realize you have broken the contract you had with your users: stuff that was valid before has now, without the end-users doing anything, become broken.

More subtly, if your breaking change was to the address-de-basis profile, it turns out to be a breaking change to all profiles depending on address-de-basis, and it propagates all the way up to the instances.

A simple solution

It is easy to see what you should have done: you (and all your users) should have versioned both the canonical url and the canonical reference! So,

    <url value="" />


    <type>
        <code value="Address" />
        <profile value="" />
    </type>

and finally

    <meta>
        <profile value="" />
    </meta>

This way, when we publish a new version we may change the canonical url and update it to end in v2, and all existing materials will remain untouched and valid.
Of course, minor changes are OK, so if we all agree to stick to semver, and only change our canonical url when a major version number changes, we’re doing fine. We formalized this approach in the STU3 version of the FHIR specification, by adding a <version> element to StructureDefinition:

    <url value="" />
    <version value="0.1" />

and then allowing you to tag canonical references with a | and a version number like so:

    <type>
        <code value="Address" />
        <profile value="|0.1" />
    </type>
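The versioned reference syntax above is easy to handle mechanically. As a small illustrative sketch (my own helper, not any official FHIR API), splitting a canonical reference on the | character gives you the url and the optional version pin:

```python
# Sketch: split a versioned canonical reference of the form "url|version"
# into its two parts; a reference without "|" carries no version pin.
# The function name is made up for illustration.

def split_canonical(reference: str):
    """Return (url, version); version is None when the reference is unversioned."""
    url, sep, version = reference.partition("|")
    return url, (version if sep else None)
```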

Ok. Done. We published STU3 and hoped the problem was solved.

But it wasn’t. Well, technically, it was – but there are a few practical complications:

  • We had not followed this practice ourselves in the specification (leaving untouched for years across STUs), and neither did the most prominent early Implementation Guides (like those from Argonaut). We set bad examples that turned out to be the path of least resistance as well. Guess what happens.
  • It just feels wrong to hard-wire version numbers in references within a set of StructureDefinitions and Implementation Guides you author. If you publish them as a version-managed, coherent set of StructureDefinitions, it is obvious that you’d like to reference the version of the StructureDefinition published at the same time as the referring StructureDefinition in that same set.
  • If you need to bump the version in a canonical url in the set that you publish, you need to be really sure you update all references to it in that set. And then (as we saw above) update the canonical url of all referring StructureDefinitions, and so on and so on. If you fail to do this, you end up in a situation where parts of your definitions use one version and other parts another. Granted, we could find someone who thinks that’s a feature, but I am sure most would disagree.

Better tooling support for authors could ease this job and help them stick to versioned references, but I kept having the nagging feeling something was not right. This was strengthened by the fact that this is not how we commonly version in other development environments:
continuing our parallel with programming concepts, versioning the canonical url would be comparable to versioning our class names:

public class GermanAddress2 { ... }
public class GermanName4 { ... }

public class GermanPatient2
{
    public GermanAddress2 Address { get; }
    public GermanName4 Name { get; }
}

It is not like we’ve never seen this before, but that’s really only done if you want to keep incompatible versions of the same class within the same codebase, because you still need access to both (e.g. to do mapping).

At the same time, we were looking at how users of Simplifier and authors of implementation guides organized their projects and how they wanted versioning to work. It turns out that StructureDefinitions simply do not live on their own, much like Java classes are not shared and distributed in isolation. They are authored and shipped in sets (like implementation guides), and are versioned in those sets. Of course, they may still use canonical references to materials outside the set, and you’d need to tightly version those references, but inside their set, they simply mean to point to each other.

You don’t need to look around long at how other parts of the industry have solved this to realize that we need the concept of a “package”, much like you would package up your classes and JavaScript files into zips, jars or whatever and ship them as npm, Maven or NuGet packages.

Packages to the rescue

If you are not familiar with these packaging mechanisms, let me call out a few properties of packages, to see how they solve our problems:

  • Packages contain artifacts that are managed by a small group of authors, who prepare them as a consistent set and publish them as an indivisible unit under the same version moniker.
  • A package is published on a registry and has a name and a version, the combination of which is enforced to be unique within that registry. You cannot overwrite a package once it has been published.
  • Packages explicitly state on which version of which other packages they depend, and contain configuration information on how to handle version upgrades and mismatches. Additionally, the user of a package may override how version dependencies are resolved, even enforcing the use of an older or newer version of a dependency when desired.

Packages are usually just normal zip archives with all artifacts you would like to publish, with an additional “configuration file” added to the zip:

    {
      "name": "",
      "version": "1.0.4",
      "dependencies": {
        "": ">3.1.4"
      }
    }

This has considerable advantages for the users:

  • When a user downloads a package, it contains “everything you need” to start working with StructureDefinitions within it, instead of having to download individual artifacts one by one from a registry.
  • The user also has the confidence that these artifacts are meant to be used together, which is not obvious if you are looking at a huge list of StructureDefinitions on the registry.
  • A package manager will ensure that if you download one package for use in your authoring environment or machine, all dependencies will also be retrieved, and you would not encounter errors due to missing references.

As an author of a package of conformance resources, this means that you no longer need to version your canonical references: they are interpreted as referring to the resources inside your package. This is even true for canonical references to StructureDefinitions and ValueSets outside your package, since you as an author explicitly declare these dependencies and version them at the package level, no longer at each individual reference. Upgrading to a new version of a dependency is now completely trivial.

It might even solve another long standing wish expressed by authors: you could configure “type aliases” within your package, stating that every reference to a certain core type (or profile) within the package is translated to a reference to another type. If you wonder why that is useful, consider the poor German authors who defined a German Address (address-de-basis) and then needed to go over all of their StructureDefinitions to replace references to the “standard” core Address with their localized version. It’s pretty comparable to redirecting conformance references to a specific version of a StructureDefinition, so we can solve this in one go.
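To make the idea concrete, here is a hypothetical sketch of package-scoped resolution: unversioned canonical references resolve against the containing package first, then against the versions pinned in its dependency list, and a package-level “type alias” rewrites a reference before resolution. All class, method and url names below are my own invention, not an existing FHIR API:

```python
# Hypothetical sketch only: models a package as a set of canonical urls,
# with pinned dependencies and package-level "type aliases".
# None of these names come from the FHIR specification.

class Package:
    def __init__(self, name, version, canonicals, dependencies=None, aliases=None):
        self.name = name
        self.version = version
        self.canonicals = set(canonicals)        # canonical urls shipped in this package
        self.dependencies = dependencies or {}   # package name -> pinned Package
        self.aliases = aliases or {}             # canonical url -> replacement canonical url

    def resolve(self, canonical):
        """Resolve a canonical reference: apply aliases first, then look in
        this package, then in the pinned dependencies. Returns the tuple
        (owning package, canonical url), or None when unresolvable."""
        canonical = self.aliases.get(canonical, canonical)
        if canonical in self.canonicals:
            return self, canonical
        for dep in self.dependencies.values():
            resolved = dep.resolve(canonical)
            if resolved is not None:
                return resolved
        return None
```

With an alias from the core Address canonical to address-de-basis, every unqualified reference to Address inside the package transparently resolves to the localized profile, which is exactly the “in one go” redirection described above.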

Concrete steps

Looking at the situation today, I suggest we do the following:

  • Remove <version> from StructureDefinition, and obsolete the use of | in references (though hard version references might still have a use).
  • Decide what we need as a syntax to define packages and declare dependencies. We could leverage existing mechanisms (package.json being a prime candidate) or integrate it into the existing ImplementationGuide resource (which would enable existing registries to simply use a FHIR endpoint to list the packages).
  • Enhance the authoring workflow in the current tools to allow the authors to create packages when they want to publish their IGs, fixing the external dependencies to specific versions.
  • Create (or re-use) package managers to enable our users to download packages and their dependencies (and make our registry compatible with those tools).

Are we done? No. There are two more design decisions (at least) to make. Let’s start with the easiest one:

It is apparent that there is a relationship between our concept of an “Implementation Guide” and a package, and we have to figure out exactly what that relationship is. I feel an Implementation Guide is a kind of package, containing the conformance resources, html files, images, etcetera that form the IG. But this also means there will be packages that are not ImplementationGuides. If we decide that the ImplementationGuide resource is the home of the “configuration part” of a package, we will need to rename ImplementationGuide to reflect its new scope.

But I saved the most impactful consequence for last:

You can no longer interpret canonical references present in conformance resources or instances outside of the context of a package.

Let me reiterate that: any authoring tool, validator, renderer or whatever system that uses StructureDefinitions to do its work will need to know this context. Those who have carefully studied the current ImplementationGuide resource realize this is already the case now, but most are blissfully unaware of this hidden feature.

For systems working with conformance resources (like instance validators), it’s likely they have this context: if you’re dealing with a StructureDefinition, you probably got it from a package in the first place (it becomes a different matter entirely if a resource can be distributed in different packages, but let’s not digress).

For servers exchanging instances, however, we’d need to assume they know the context out of band. But this won’t do for partners exchanging outside of such a controlled environment. For this, let me suggest we summon an old friend, hidden in a dark corner of the FHIR specification: Resource.implicitRules.

The specification states:

Resource.implicitRules is a reference to a set of rules that were followed when the resource was constructed, and which must be understood when processing the content

That sounds about right. However, it continues:

Wherever possible, implementers and/or specification writers should avoid using this element.

It then sparingly lists the reasons why we shouldn’t. I suggest we take a fresh look at this element and see whether the element we thought up so many years ago may find a use after all.


We’re not done yet, and I admit I am not sure I’ve dealt with all the consequences of this versioning solution here, but it has one thing going for it: we’re profiting from solutions thought up by the rest of the industry, who have dealt with this before. But are an exchange standard and its artifacts completely comparable to a software package? Will we be able to fix the problem of communicating package context?

I don’t know yet, but the more I think about the uses of packages and how it eases the life of authors and users of implementation guides, the more I think we should pursue this a bit in online discussions. Hope they are as engaging as the one at FHIR DevDays!

Tune in to our new Profiling Academy and become a profiling expert yourself!

By Lilian Minne – At the DevDays 2017, we launched our new product: the Profiling Academy. The Profiling Academy is available for all Simplifier users (free to join) and is meant for anyone willing to learn more about FHIR profiling. We aim to share our knowledge in a way that is easy to digest for all levels of FHIR users: beginner, intermediate and advanced.

The Profiling Academy offers short, digestible modules covering one topic each. Each module offers reading material, real-life examples and exercises.  At this moment the following modules are available: Start Profiling, Extensions, Slicing and Best-Practices. More modules will be added in the near future. If you want to follow a module, just click on its name or use the menu in the upper navigation bar.

Profiling academy

Don’t forget to visit the other pages as well:

  • Feature movies: Watch interesting movies explaining some of the features of our products.
  • Helpful links: This page contains useful links when working with FHIR.
  • Meet our team: Get to know our Profiling Team members who are happy to introduce themselves.
  • About FHI: Learn more about our company and how to get in touch.

We are curious to know how you feel about the Profiling Academy. As the Profiling Academy was built using the IG-editor in Simplifier (pretty cool, right?), please leave your comments in the Issue Tracker of the project.

Make your first FHIR client in R – within one hour!


R on FHIR is the latest addition to our FHIR tool suite. It is available on The Comprehensive R Archive Network (CRAN). R on FHIR is, as you can tell from the name, an R package that supports R users with fetching data from FHIR servers. It provides simple, but powerful tools to perform read, version read and search interactions on FHIR servers and fetch the resulting resources in an R friendly format.

R on FHIR consists of two classes called fhirClient and searchParams, just like our .NET API. The fhirClient provides functions to perform read and search operations and to use the FHIR paging mechanism to navigate around a series of paged result Bundles. The searchParams class provides a set of fluent calls to allow you to easily construct more complex queries. The searchParams class falls outside the scope of this blog, but you can read the documentation on R on FHIR  to learn more about how to use this class. The documentation also includes some examples for both classes.

Step 0 – Install R (and RStudio)

Before we start, we need to make sure we have R, and preferably RStudio, installed on our machine. See the links below. For this blog I used R version 3.2 (higher versions will also work, and probably lower too, but I did not test that) and RStudio version 1.0.153.

Step 1 – Install and load R on FHIR

FHIR up (first and last pun, I promise) your RGui or RStudio and execute the following statement in your console:

# This will install R on FHIR from The Comprehensive R Archive Network.
> install.packages("RonFHIR")

Step 2 – Create a new R script

In this blog we are going to write a function which returns an overview of a population based on a postal code. Go to File > New File > R Script (or hit Ctrl+Shift+N) in RStudio, or File > New script in RGui. Now that we have an empty script we can start writing our function, called ‘getAreaInfo’, with the postal code as our parameter. On the first line of our function we create an instance of a fhirClient. Your script should look like this:

# Load the R on FHIR package from the library
# where R stores its packages.
library(RonFHIR)

getAreaInfo <- function(postalCode){
 client <- fhirClient$new("")
}

Step 3 – Searching Patients on a FHIR server

Now that we initialized a client we can start performing a search operation on a server. The most basic search is the fhirClient’s $search. It searches all resources of a specific type based on zero or more criteria. The $search function will return a Bundle as a list containing the found ResourceEntries as a data.frame.  Normally, any FHIR server will limit the number of search results returned. To obtain all results we can use $continue which uses the FHIR paging mechanism to navigate around a series of paged result Bundles. In our script below we used the $search and $continue functions to find all Patients living within the given postal code and return how the genders are distributed.


getAreaInfo <- function(postalCode){
 client <- fhirClient$new("")
 postalQuery <- paste("address-postalcode=", postalCode, sep = "")
 bundle <- client$search("Patient", c(postalQuery, "address-use=home"))
 genders <- c()
 # $continue returns NULL once the last page of results is reached
 while(!is.null(bundle)){
  genders <- c(genders, bundle$entry$resource$gender)
  bundle <- client$continue(bundle)
 }
 return(table(genders))
}

Now that our function is complete we can call it from our console, but first we have to tell R to source our code. This can be done with Code > Source or Ctrl+Shift+S in RStudio and File > Source R code… in RGUI. Now we can run our ‘getAreaInfo’ function in our R console.

> getAreaInfo(3999)
female   male 
     1    574

Congratulations! You just created your first function using a FHIR client in R to track the gender distribution in a given postal area.


This blog only showed a quick example of how you can use R on FHIR. For more details and examples you can always consult the documentation. Feel free to give feedback and join the R on FHIR development at our GitHub page.

We are planning to do a birds of a feather session during the HL7 FHIR DevDays here in Amsterdam. Hope to see you there.

Vonk 0.3: subscriptions and more


Recently we released an upgrade of the enterprise FHIR Server named Vonk. This release 0.3 contains features like custom search parameters, subscriptions and an Administration API. For a complete overview of new features, see the release notes. You can try for yourself at or download a trial version on

Remember my bike?

It is strong, durable and simple. But if you are going to take a long ride, you need stuff. Hot coffee, a raincoat and of course binoculars to check all the birds. That means hanging a back roller on it. And that is what we did with Vonk as well: add a back roller full of new features.

Protection from the elements

What a raincoat is to a cyclist, validation can be to Vonk. Bare validation was already possible (and still is, on /<resourcetype>/$validate), but now you can also protect your data by validating all resources that are created or updated. If they fail validation, they are rejected.

That is nice, but you probably have made your own custom profiles (in the form of StructureDefinitions). Of course hosted on 🙂 Previously you had to send these to /StructureDefinition/yourSD to make Vonk use it for validation. You can now feed them to Vonk through the new Administration API, still by doing a FHIR update interaction.

Finally, you can specify a list of profiles in the settings so that Vonk will only allow resources conforming to any of those profiles.


Although a subscription to fresh coffee while riding is not really feasible, you can now subscribe to resource changes in Vonk. How this works is described in the FHIR specification, but the gist of it is that you post a Subscription resource to the Vonk Administration API. The resource specifies criteria for which resources you would like to receive notifications, and a channel through which you want to get those notifications.

Current limitations are that you can only specify a channel of type rest-hook, and changes on referenced resources do not ‘bubble up’. And although subscriptions are evaluated asynchronously, we did not yet build it for large numbers of subscriptions. See the documentation for details.
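To give an idea of the gist described above, a minimal STU3 Subscription using a rest-hook channel could look like the sketch below. The criteria and endpoint are made-up example values, not ones used by Vonk:

    <Subscription xmlns="http://hl7.org/fhir">
        <status value="requested" />
        <reason value="Monitor new laboratory results" />
        <criteria value="Observation?category=laboratory" />
        <channel>
            <type value="rest-hook" />
            <endpoint value="https://example.org/fhir-notify" />
            <payload value="application/fhir+xml" />
        </channel>
    </Subscription>

The server evaluates the criteria whenever a resource changes and, for matches, POSTs a notification to the endpoint in the requested payload format.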

Find more

If you search for birds, you need binoculars (or better: a telescope). If you look for resources, you need as many features as possible. So we introduced custom search parameters. You can specify several sources (the .NET API, a zip file, a directory) with custom search parameters at startup. After that, new or updated resources will be indexed for these new parameters, and you can use them in your search interactions. Existing resources can be re-indexed through the Administration API, so they can be found with the new parameters as well.

Besides that, we implemented support for _list, _has, _elements, _type and _revinclude. And we improved on :missing, :exact and datetime handling. Finally, we moved to using the FHIRPath expression in the SearchParameter resource to extract the indexed data, meaning we can now handle all the search parameters in the spec (and probably all your custom ones as well). A full list of improvements is provided in the release notes.

Go faster

I made my bike slightly faster by making it lighter: I removed the fixed dynamo and headlamp, as I use a much better LED lamp when I have to. We also made Vonk faster, but by adding things: indexes! Both the SQL and MongoDB database implementations could use a few extra indexes, since reading and searching are used a lot more than update and create.

Have your partner make lunch

Do you know the happy feeling when you find a nice lunch in your bag, ready-made by your partner? You can now pre-load Vonk with a set of resources as well, and make your users just as happy. This is particularly useful if Vonk is used as a reference server for testing between communication partners. You use it by posting a zip file with resources through the Administration API (/administration/preload). Note that this interface is not meant for bulk loading really large sets of resources.

Start your ride easier

We have had a lot of positive feedback on the ease of deployment of Vonk. But we felt it could still be easier. So we removed the step of creating your SQL database yourself. If you allow Vonk to do so, it will automatically create databases for both the regular operations and the Administration API. For MongoDB this already happened, and for the Memory implementation this is obviously not an issue.

Beware if you have a database for the previous version of Vonk (0.2.*). It cannot be automatically updated. We will contact customers with professional support to upgrade their databases. You can check your version of Vonk in the CapabilityStatement.


You need a license to run Vonk. On you can download an evaluation license. This will grant you usage of Vonk for 30 days, and it requires you to restart Vonk every 12 hours. If you need more evaluation time, you can simply download a renewed evaluation license. If you need to test without the 12 hour limit, please contact us.


If you’re interested in using Vonk for production purposes, visit our pricing pages. We are still working towards a usage based pricing model and in the meantime you can enroll in our Launching Customer Program. Early customers are rewarded with the choice to stay in this pricing model once we have implemented the usage based pricing model.

What’s new in Simplifier 16.5?

By Lilian Minne

The latest Simplifier release includes a couple of pretty nice improvements to implementation guides, workflows and the GitHub Webhook. We also added the ability to directly edit resources in XML from Simplifier. In this article we will tell you all about the latest release.

Implementation guides

Let’s start with the implementation guides. While Simplifier was originally designed for uploading and downloading publications, the idea of working with projects and organizations followed later. IGs were treated as separate parts, not linked to a specific project. In this new release, IGs are always part of a project. A new tab is visible in each project where you can find the project’s IG, as shown in the screenshot below. Old IGs will still be available; however, new IGs can no longer be created outside a project.

ImplementationGuide in Project
Implementation Guides of a project.

Moreover, we introduced a new storage system for IGs. Your IG is now stored as separate Markdown files in your project. This has a lot of advantages. It is now possible to access them as separate resources, add issues to them and check their version history (more on this later). In addition, your IGs will now be available in your GitHub repository and can be downloaded in a ZIP file together with the other files of your project.


IG tree example

IG tree in the IG editor.

To illustrate how this works, see the screenshot of an example IG containing two chapters called ‘First part’ and ‘Second part’. ‘First part’ also contains a child called ‘Child of first part’.

The different parts of the IG are now accessible from the Resources tab in your project as well as from the search engine. Two categories called Text and Image have been added. To search for IG parts, just check the Texts box.

New Test resources

Resources inside a project.

The IG parts can be accessed in the same way as a Resource. From the Issues tab you can create new issues that specifically address this IG part. From the History tab you can access its version history. In this way you can go back to an earlier version and directly edit it from here. It is even possible to compare two different versions by selecting the boxes of the versions you want to compare.

Version history

Version history of a resource.

Custom workflow

In addition to the improvements on IGs, we enhanced the custom workflow. With a custom workflow you can add additional statuses besides the standard FHIR statuses. For example, if you want to be able to explicitly state that your resource is ready for review, you could add a Review status. In the new release, it is possible to click on the status of a resource to see all possible statuses and their explanations, as shown in the screenshots below.

Resource tab - custom workflow

Resource tab inside a project.

Custom workflow overview

Custom workflow overview.

GitHub Webhook

The last big improvement to an existing feature concerns the GitHub Webhook. The Webhook is now one of the most used additional features of Simplifier. Over time, some users have reported issues with complex merges. To address these reports, we refactored the whole GitHub Webhook functionality in Simplifier, which is pretty complex stuff. We now use a Git engine to calculate the differences, which results in much more reliable updates of projects that contain a lot of branches and complex merges. These changes significantly improved the reliability and predictability of the GitHub Webhook.

Edit resources and copy code to clipboard

My personal favorite is the new feature that enables you to directly edit your resources within Simplifier by editing the XML code. Just click Update and choose the Edit option to open a “simplified” XML-editor. Pretty cool, right?

Edit resources by copy&paste

Update a resource.

Edit xml

Edit the XML of a resource directly in Simplifier.

From the download button it is also possible to copy the complete XML or JSON code of a resource to your clipboard.

Work in progress

Work in progress, user profile

Personal menu.

We are working on something we like to call Facebookification. From your personal menu you can now access your User Profile. This profile is public, so other users can access it as well. In the future, we plan to add more functionality for personalizing your profile. Also new in this menu is a separate Invites page, which shows your pending invites for Simplifier projects.

The way forward

To give you a taste of the cool stuff that we are planning for the future (no guarantees of course), here are some of our ideas:

  • Import and export of IG resources
  • Add intelligence in the IG editor (e.g. placeholders)
  • Comparison of the history of IG pages
  • Internal references to images in the same project (note that external references are already possible)

Of course we are always open to new ideas as well, so don’t hesitate to use the feedback link on Simplifier.

Element Identifiers in FHIR

In this article, we’ll take a closer look at element identifiers in FHIR, the relevant changes introduced by STU3 and the reasons that motivated those changes.

FHIR has supported element identifiers since DSTU1. They are intended to specify unique identifiers for the elements within an individual resource. The official definition from the STU3 specification states: “The id property of the element is defined to allow implementers to build implementation functionality that makes use of internal references inside the resource. This specification does not use the internal id on the element in any way.”

Element ids are particularly convenient for identifying the individual ElementDefinitions inside a StructureDefinition. In a basic StructureDefinition without slicing, each element definition has a unique path. However, when a profile introduces slicing constraints, element paths are no longer unique, as the following example demonstrates:

    Element name   Slice name   Element path
    Patient                     Patient
    identifier                  Patient.identifier
    identifier     ssn          Patient.identifier
    system                      Patient.identifier.system
    identifier     ehr          Patient.identifier
    system                      Patient.identifier.system

Clearly, the element path itself is not sufficient to uniquely identify individual element definitions, as we also require information about the slice name(s). But if we also include the slice name(s) in the expression, the resulting value is no longer ambiguous and becomes a unique identifier:

    Element name   Slice name   Element identifier
    Patient                     Patient
    identifier                  Patient.identifier
    identifier     ssn          Patient.identifier:ssn
    system                      Patient.identifier:ssn.system
    identifier     ehr          Patient.identifier:ehr
    system                      Patient.identifier:ehr.system

In order to support this, FHIR STU3 introduces some changes in the definition of element ids. The following table compares the specification of element ids in different FHIR versions:

FHIR version   Data type   Max length
DSTU 1         id          36 characters
DSTU 2         id          64 characters
STU 3          string      1 MB

In STU3, the element identifier datatype has changed from id to string, effectively removing the maximum length constraint. Also, STU3 allows any string value that does not contain spaces, whereas in earlier versions, the set of valid characters was limited to A-Z, a-z, 0-9, “-” and “.”.
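To make the difference concrete, here is a minimal sketch (in Python; the function names are made up for illustration) of the validity rules as described above:

```python
import re

def is_valid_element_id_dstu2(value):
    # DSTU2: "id" datatype, characters limited to A-Z, a-z, 0-9, "-" and ".",
    # with a maximum length of 64 characters.
    return bool(re.fullmatch(r"[A-Za-z0-9.\-]{1,64}", value))

def is_valid_element_id_stu3(value):
    # STU3: any non-empty string that does not contain spaces.
    return bool(value) and " " not in value

# The preferred STU3 identifier format uses a colon, which DSTU2 did not allow:
print(is_valid_element_id_dstu2("Patient.identifier:ssn"))  # False
print(is_valid_element_id_stu3("Patient.identifier:ssn"))   # True
```

Note how the sliced identifiers from the table above would already have been invalid under the DSTU2 rules.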

FHIR does not specify a mandatory format for element identifiers. In STU3, any unique non-empty string value without spaces is considered to be a valid identifier. However STU3 does introduce a preferred format for the identifiers of ElementDefinitions in a StructureDefinition resource:


Similar to the element path, the preferred identifier format specifies a number of path segments, separated by a dot “.” character. Each segment represents an individual element in the hierarchy and starts with the element name, optionally followed by a colon “:” character and the associated slice name (if not empty).
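As a sketch, the preferred format can be constructed from the element hierarchy like this (Python; the helper name is hypothetical):

```python
def preferred_element_id(segments):
    """Build a preferred-format element id from (element name, slice name) pairs.

    Each segment is the element name, optionally followed by ":" and the
    slice name; segments are joined with ".".
    """
    return ".".join(
        f"{name}:{slice_name}" if slice_name else name
        for name, slice_name in segments
    )

# The "ssn" slice from the table above:
print(preferred_element_id([("Patient", None),
                            ("identifier", "ssn"),
                            ("system", None)]))
# Patient.identifier:ssn.system
```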

The preferred element identifier value only depends on the position of the element in the element tree. Different representations of a specific ElementDefinition in the differential and the snapshot component share the exact same identifier value. As the astute reader may have already noticed, this implies that element identifiers in the preferred format are actually not fully unique within the context of the containing StructureDefinition resource. Also, if we include multiple StructureDefinitions in a Bundle resource, then ElementDefinition identifiers are not guaranteed to be unique within the context of the containing Bundle resource.

Note: When a FHIR resource is serialized to the XML representation, FHIR element identifiers are expressed as xml:id attributes. According to the W3C specification, “the [identifier] value is unique within the XML document”. So in fact, the FHIR specification violates the W3C XML specification… However in practical situations, this idiosyncrasy of FHIR shouldn’t pose an issue.

In general, software cannot assume that FHIR element identifiers follow the preferred format. The FHIR specification itself does not use the internal id on the element in any way (1). For ElementDefinitions contained in a StructureDefinition resource, the element name and the slice name remain the leading identifying attributes for processing logic to act on. This also implies that a sparse differential component should always include parent elements with a non-empty slice name, even if they are unconstrained. In theory, processing logic could reconstruct the parent element hierarchy by parsing the element identifiers in the differential component, provided that all identifiers are specified in the preferred format. However, as the preferred identifier format is not required, generic logic cannot rely on this information.
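For instance, under the assumption that all identifiers do follow the preferred format (which, again, generic logic cannot rely on), reconstructing the element hierarchy could be sketched as follows:

```python
def parse_element_id(element_id):
    """Split a preferred-format element id into (name, slice name) pairs.

    Assumes the id follows the preferred format and that names contain
    no "." or ":" characters; real-world logic cannot rely on this.
    """
    segments = []
    for part in element_id.split("."):
        name, _, slice_name = part.partition(":")
        segments.append((name, slice_name or None))
    return segments

print(parse_element_id("Patient.identifier:ssn.system"))
# [('Patient', None), ('identifier', 'ssn'), ('system', None)]
```

The parent of an element is then simply the id with the last segment removed.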

Nonetheless, the standard open source Java and the .NET libraries for FHIR STU3 both provide implementations of a snapshot generator that can generate element ids in the preferred format. So within a clearly defined use context that guarantees standardized element identifiers to be present, e.g. because all snapshots are always (re-)generated by a standard FHIR library, it becomes possible to implement processing logic that acts on the standard identifiers.

Forge, the FHIR profile editor, introduced preliminary support for element identifiers as part of the initial STU3 release from May 2017. Initially, Forge allowed users to specify custom identifiers, but this feature has since been deprecated. As of release 16.4 for STU3, Forge will automatically generate element identifiers in the preferred format on all ElementDefinitions in both the differential and snapshot components. Users cannot manually edit the generated identifiers.

1. ^ As Grahame points out in his comment, the FHIR standard actually does use element ids as the target of content references. However, these references do not rely on the format of the identifier.

Simplifier on STU3

by Martijn Harthoorn

As of today Simplifier supports FHIR STU3. Our team has been working hard to give you a FHIR registry that can read, write, listen and speak FHIR STU3 resources. Now you can start uploading STU3 resources.

The change in the FHIR specification from DSTU2 to STU3 itself is not all that big. And had Simplifier been an offline tool, upgrading to the new version would have been a manageable job.

But we have been on a bigger journey. Because we host thousands of profiles and other resources, from the entire FHIR community all over the world, and they are all DSTU2.

And so we didn’t just upgrade Simplifier to STU3, but rather made Simplifier multilingual. Simplifier now supports both DSTU2 and STU3.

From the beginning of January, right after the San Antonio connectathon, until the end of March, our team has been busy upgrading Simplifier to a new framework (ASP.NET Core) to make it more stable and much faster (pages resolve up to 20 times faster!). This allowed us to take the next step: hosting multiple FHIR engines internally.

All features are now able to switch the FHIR engine from under their seat, depending on the version of the resource they are dealing with. This includes uploading, downloading, the GitHub webhook, the history compare, and the FHIR endpoints.

What’s the difference for you?
Each project and each resource now has a FHIR version. In the past we kept numbers like 0.50 (which is actually DSTU1) and 1.0.2 (which is DSTU2). And we only used these numbers for resources. These numbers were set by the version of the internal FHIR engine we used. All these numbers have now been replaced by “DSTU1”, “DSTU2”, or “STU3”.

Plus, the versions are no longer set by Simplifier, but by you. You can set your project to a FHIR version. And all resources you upload will be read as belonging to that version. You can always switch the FHIR version of your project and nothing will happen to your resources. They will keep the version they already had.

In the near future we will make it possible for you to manually change the FHIR version for the resource too. This goes for all users, whether you have a free account or a paid account.

Simplifier as a FHIR server
As of today the Simplifier FHIR endpoints speak STU3. Both the public endpoint:

and the project endpoint: [project]

This also means that you can only connect with client tools that speak STU3.

Validation
A feature that a lot of our users have been asking for is validation. All example STU3 resources will now have a Validation button. We will be adding new features to validation soon, like bulk validation, validation through the FHIR endpoint and validation in Gist.

Are we done?

Probably not. Making a platform ready for multiple versions of a standard is a tricky business. So we expect to mend a few bugs and make some improvements over the weeks to come. Use the feedback button if you think we can do better.

As FHIR registry builders, we are in the business of canonical URLs. Yours, to be specific. Since all the base resources in the FHIR spec have the exact same URL in STU3 as they had in DSTU2, you can imagine that we practically had to build a wall between the DSTU2 and STU3 resources, to make sure they don’t start referencing each other. Imagine validating or creating a snapshot using a base profile from the wrong version of the standard. You’re right: it can’t be done. So when you deal with STU3, we will filter out everything DSTU2.

Staying on DSTU2
Of course you can stay in DSTU2 as long as you like. Simplifier will support DSTU2 for years to come. But remember that the FHIR spec moves forward and so will all of our new features. So if you can migrate to STU3, you probably should.

Moving your resources to STU3
The community has several tools to help you migrate your resources to STU3. Migrating your profiles is slightly more difficult, but there are options, besides doing it manually. And remember, before you upload STU3 resources, make sure you set your Simplifier project to STU3 first!

Happy profiling!