Versioning and canonical urls


At last week’s FHIR Developer Days in Amsterdam, we had a highly enjoyable break-out session on the use of canonical urls when taking versioning into consideration.

The issue had been popping up more often recently, and we, as the core team, had been pushing off trying to solve it until we had a better understanding of the problem. But here we were: we had a room full of knowledgeable people and enough experience to take a stab at a solution.

For those not present, I’d like to summarize what we discussed, seamlessly turning into what I think needs to happen next.

The problem

Let me start off by giving you an overview of the problem.
The two key players in the game are the canonical url and the canonical reference. The canonical url is a special identity present on most of our conformance resources (StructureDefinition, ValueSet and the like). This identity is used by canonical references, present in StructureDefinitions (e.g. StructureDefinition.base) and FHIR instances (Resource.meta.profile), to refer to other StructureDefinitions (or any other conformance resource). For example, the StructureDefinition for the localized version of Address in Germany starts like this:

    <url value="" />
    <name value="address-de-basis" />
    <title value="Adresse, deutsches Basisprofil" />

Here, the <url> contains the canonical url of the address-de-basis StructureDefinition.

This canonical url can be referenced by other parts of your specification, as for example here at Patient.address in the StructureDefinition of the German version of Patient:

    <path value="Patient.address" />
    <short value="Adresse nach deutschem Profil" />
    <type>
        <code value="Address" />
        <profile value="" />
    </type>

This is quite comparable to declaring a class in your favorite programming language, and then referring to that class when you declare an instance variable or class member:

public class GermanAddress { ... }

public class GermanPatient
{
    public GermanAddress Address { get; }
}

As a good member of the FHIR community, you have published all your profiles to a registry (these examples come from the German Basisprofil on Simplifier), and people are happily validating their Patient instances against it. Some of the instances may even have gotten tagged by a profile claim:

    <meta>
        <profile value="" />
    </meta>
    <!-- rest of the Patient's data -->

Before long, this canonical reference gets baked into both instances in databases and software using your profile.

And then, the day comes that you are required to make changes to your profile. Breaking changes, in fact. You are circulating your brand new version of the profile amongst your colleagues, everyone is updating their test-instances and all works fine. Until you publish the new version on the registry. Once a copy of your StructureDefinition starts trickling down to servers around the country, previously correct instances of the German Patient will start to fail validation. You realize you have broken the contract you had with your users: stuff that was valid before has now (without the end-users doing anything) become broken.

More subtly, if your breaking change was to the address-de-basis profile, it turns out to be a breaking change to all profiles depending on address-de-basis, which then propagates all the way up to the instances.

A simple solution

It is easy to see what you should have done: you (and all your users) should have versioned both the canonical url and the canonical reference! So,

    <url value="" />


    <type>
        <code value="Address" />
        <profile value="" />
    </type>

and finally

    <profile value="" />

This way, when we publish a new version we may change the canonical url and update it to end in v2, and all existing materials will remain untouched and valid.
Of course, minor changes are OK, so if we all agree to stick to semver, and only change our canonical url when a major version number changes, we’re doing fine. We formalized this approach in the STU3 version of the FHIR specification, by adding a <version> element to StructureDefinition:

    <url value="" />
    <version value="0.1" />

and then allowing you to tag canonical references with a | and a version number like so:

    <type>
        <code value="Address" />
        <profile value="|0.1" />
    </type>

Ok. Done. We published STU3 and hoped the problem was solved.

But it wasn’t. Well, technically, it was – but there are a few practical complications:

  • We had not followed this practice ourselves in the specification (leaving the core canonical urls untouched for years across STUs), and neither did the most prominent early Implementation Guides (like those from Argonaut). We set bad examples that turned out to be the path of least resistance as well. Guess what happens.
  • It just feels wrong to hard-wire version numbers in references within a set of StructureDefinitions and Implementation Guides you author. If you publish them as a version-managed, coherent set of StructureDefinitions, it is obvious that you’d like to reference the version of the StructureDefinition published at the same time as the referring StructureDefinition in that same set.
  • If you need to bump the version in a canonical url in the set that you publish, you need to be really sure you update all references to it in that set. And then (as we saw above) update the canonical url of all referring StructureDefinitions, and so on and so on. If you fail to do this, you end up in a situation where part of your definitions are using one version and another part another version. Granted, we could find someone who thinks that’s a feature, but I am sure most would disagree.

Better tooling support for authors could ease this job and help them stick to versioned references, but I kept having the nagging feeling something was not right. This was strengthened by the fact that this is not how we commonly version in other development environments: continuing our parallel with programming concepts, versioning the canonical url would be comparable to versioning our class names:

public class GermanAddress2 { ... }
public class GermanName4 { ... }

public class GermanPatient2
{
    public GermanAddress2 Address { get; }
    public GermanName4 Name { get; }
}

It is not like we’ve never seen this before, but that’s really only done if you want to keep incompatible versions of the same class within the same codebase, because you still need access to both (e.g. to do mapping).

At the same time, we were looking at how users of Simplifier and authors of implementation guides organized their projects and how they wanted versioning to work. It turns out that StructureDefinitions simply do not live on their own, much like Java classes are not shared and distributed in isolation. They are authored and shipped in sets (like implementation guides), and are versioned in those sets. Of course, they may still use canonical references to materials outside the set, and you’d need to tightly version those references, but inside their set, they simply mean to point to each other.

You don’t need to look around long at how other parts of the industry have solved this to realize that we need the concept of a “package”, much like you would package up your classes and Javascript files into zips, jars or whatever and ship them as npm, maven or NuGet packages.

Packages to the rescue

If you are not familiar with these packaging mechanisms, I’ll call out a few properties of packages to see how they solve our problems:

  • Packages contain artifacts that are managed by a small group of authors, who prepare them as a consistent set and publish them as an indivisible unit under the same version moniker.
  • A package is published on a registry and has a name and a version, the combination of which is enforced to be unique within that registry. You cannot overwrite a package once it has become published.
  • Packages explicitly state on which version of which other packages they depend, and contain configuration information on how to handle version upgrades and mismatches. Additionally, the user of a package may override how version dependencies are resolved, even enforcing the use of an older or newer version of the dependency when desired.

Packages are usually just normal zip archives with all the artifacts you’d like to publish, with an additional “configuration file” added to the zip (the package and dependency names below are just illustrative):

    {
      "name": "my.fhir.package",
      "version": "1.0.4",
      "dependencies": {
        "some.other.package": ">3.1.4"
      }
    }

This has considerable advantages for the users:

  • When a user downloads a package, it contains “everything you need” to start working with StructureDefinitions within it, instead of having to download individual artifacts one by one from a registry.
  • The user also has the confidence that these artifacts are meant to be used together, which is not obvious if you are looking at a huge list of StructureDefinitions on the registry.
  • A package manager will ensure that if you download one package for use in your authoring environment or machine, all dependencies will also be retrieved, and you would not encounter errors due to missing references.

As an author of a package of conformance resources, this means that you no longer need to version your canonical references: they are interpreted as referring to the resources inside your package. This is even true for canonical references to StructureDefinitions and ValueSets outside your package, since you as an author explicitly declare these dependencies and version them at the package level, no longer at each individual reference. Upgrading to a new version of a dependency is now completely trivial.

It might even solve another long standing wish expressed by authors: you could configure “type aliases” within your package, stating that every reference to a certain core type (or profile) within the package is translated to a reference to another type. If you wonder why that is useful, consider the poor German authors who defined a German Address (address-de-basis) and then needed to go over all of their StructureDefinitions to replace references to the “standard” core Address with their localized version. It’s pretty comparable to redirecting conformance references to a specific version of a StructureDefinition, so we can solve this in one go.
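To make the idea concrete, such an alias could be declared in the package’s configuration file. The syntax below is purely hypothetical (no such mechanism exists yet), and the German canonical url is only an assumption for illustration:

```json
{
  "name": "de.basisprofil",
  "version": "1.0.0",
  "aliases": {
    "http://hl7.org/fhir/StructureDefinition/Address":
      "http://fhir.de/StructureDefinition/address-de-basis"
  }
}
```

Every canonical reference to the core Address within the package would then be interpreted as a reference to the German address profile, without having to touch each individual StructureDefinition.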

Concrete steps

Looking at the situation today, I suggest we do the following:

  • Remove <version> from StructureDefinition, and obsolete the use of | in references (though hard version references might still have a use).
  • Decide what we need as a syntax to define packages and declare dependencies. We could leverage existing mechanisms (package.json being a prime candidate) or integrate it into the existing ImplementationGuide resource (which would enable existing registries to simply use a FHIR endpoint to list the packages).
  • Enhance the authoring workflow in the current tools to allow the authors to create packages when they want to publish their IGs, fixing the external dependencies to specific versions.
  • Create (or re-use) package managers to enable our users to download packages and their dependencies (and make our registry compatible with those tools).

Are we done? No. There are two more design decisions (at least) to make. Let’s start with the easiest one:

It is apparent that there is a relationship between our concept of an “Implementation Guide” and a package. And we have to figure out exactly what that relationship is. I feel an Implementation Guide is a kind of package, containing the conformance resources, html files, images, etcetera that form the IG. But this also means there will be packages that are not ImplementationGuides. If we decide that the ImplementationGuide resource is the home of the “configuration part” of a package, we will need to rename ImplementationGuide to reflect its new scope.

But I saved the most impactful consequence for last:

You can no longer interpret canonical references present in conformance resources or instances outside of the context of a package.

Let me reiterate that: any authoring tool, validator, renderer or whatever system that uses StructureDefinitions to do its work will need to know this context. Those who have carefully studied the current ImplementationGuide resource realize this is already the case now, but most are blissfully unaware of this hidden feature.

For systems working with conformance resources (like instance validators), it’s likely they have this context: if you’re dealing with a StructureDefinition, you probably got it from a package in the first place (it becomes a different matter entirely if a resource can be distributed in different packages; well, let’s not digress).

For servers exchanging instances, however, we’d need to assume they know the context out of band. But this won’t do for partners exchanging outside of such a controlled environment. For this, let me suggest summoning an old friend, hidden in a dark corner of the FHIR specification: Resource.implicitRules.

The specification states:

Resource.implicitRules is a reference to a set of rules that were followed when the resource was constructed, and which must be understood when processing the content

That sounds about right. However, it continues:

Wherever possible, implementers and/or specification writers should avoid using this element.

And then sparingly lists the reasons why we shouldn’t. I suggest we take a fresh look and see whether the element we thought up so many years ago may find a use after all.


We’re not done yet, and I admit I am not sure I’ve dealt with all the consequences of this versioning solution here, but it has one thing going for it: we’re profiting from solutions thought up by the rest of the industry, who have dealt with this before. But is an exchange standard and its artifacts completely comparable to a software package? Will we be able to fix the problem of communicating package context?

I don’t know yet, but the more I think about the uses of packages and how they ease the life of authors and users of implementation guides, the more I think we should pursue this a bit further in online discussions. I hope they are as engaging as the one at FHIR DevDays!

Validation – and other new features in the .NET API stack

During the long and rainy summer here in Amsterdam, our team has been working hard to expand the .NET API to support more advanced use cases like full profile validation. Though validation may be The Most Wanted Feature for many of you, it is certainly not the only exciting new feature we’d like to showcase, so I’ll give you a brief overview of what’s coming up in the newest version of the .NET API in the sections below.

Poco-less parsing

We all love the (generated) .NET classes like Patient and Observation to manipulate and create FHIR data, including the parsers and serializers that turn them into xml and json. In fact, this is what the .NET API started out with some years ago. There is a growing class of tools however, that only need a subset of information, and for which the all-or-nothing nature of working with POCOs is a hindrance. Some of the reasons you may want to start working with poco-less parsing are:

  • Performance: Our generic FHIR REST server Spark does not really need to parse data into POCOs just to index and store the data in its internal format.
  • Flexibility: Tools may want to work with partially incorrect or incomplete resources, which would never be parseable into the generated classes. Our profile editor Forge, for example, would gladly try to read StructureDefinitions that are incomplete or incorrect so the author can correct and save them using that tool.
  • Version independence: You may like to work with FHIR data from multiple FHIR versions and write code that deals with the differences. With the POCO classes you would commit yourself to a specific FHIR version; poco-less parsing avoids that commitment. This will be used by the registry tool Simplifier to support profiles in both DSTU2 and STU3.

The central abstraction for working with FHIR data without using POCOs is IElementNavigator, which includes a set of extension methods to do LINQ-to-Xml-like navigation on the data:


Many of the newer parts of the .NET API are built on top of this interface, and since implementing IElementNavigator is pretty straightforward, you could implement this interface on top of your own (proprietary) model and have it participate in the functionality the .NET FHIR stack offers, like validation. Out of the box, we provide implementations for POCOs, XML/Json and probably RDF.
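To give an impression, working with the interface looks something like this (a sketch only; treat the exact method names as indicative of the API rather than definitive):

```csharp
// Create a navigator over raw xml, without parsing into POCOs
IElementNavigator nav =
    XmlDomFhirNavigator.Create("<Patient xmlns='http://hl7.org/fhir'>...</Patient>");

// Extension methods provide LINQ-to-Xml-like navigation
foreach (var name in nav.Children("name"))
    Console.WriteLine($"{name.Name}: {name.Value} (at {name.Location})");
```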

FhirPath evaluation

In the meantime, work has continued on an HL7-designed navigation and extraction language called FhirPath (formerly known as FluentPath). FhirPath looks a lot like XPath and can be used to walk trees, extract subtrees and formulate invariants, just like XPath. In fact, almost all XPath invariants in the spec have now been replaced by their FhirPath equivalents.

In parallel, we have been adding a FhirPath compiler and execution environment to the API. It’s built on top of the IElementNavigator described above, so you can now say:

IElementNavigator instance = XmlDomFhirNavigator.Create("...");
var names = instance.Select("Patient.name.where(given = 'Ewout')");

Since we’ve also implemented IElementNavigator on top of the POCOs, you can now quickly select any part of an in-memory instance or run FhirPath invariants on top of POCOs.

Underneath, a FhirPath compiler will turn these FhirPath statements into ordinary .NET lambdas, so if you execute the same statement multiple times, you’ll be running native lambdas, not re-interpreted strings.

Retrieving conformance resources

Most of us will sooner or later need the metadata for Resources, called the conformance resources in FHIR. These allow you to get information about which elements are members of a given resource type, which datatypes exist, cardinality of elements etcetera. The .NET API has a new abstraction (based on previous work that’s been around for a while), the IResourceResolver.

Its main method is ResolveByCanonicalUri() and it will get you the conformance resource (StructureDefinition, ValueSet, OperationDefinition etc.) with a given uri. All core datatypes and resources have well-known canonical uris, but these could of course also be more specific profiles, like Argonaut’s ‘DAF Patient’.

We have provided implementations for locating these resources within a zip file (like the one available on the FHIR website) and within the directory where your application is running. Resolvers can be cached and combined, so you can have a resolver that first tries your local directory, then the standard zip, then goes out to the web.
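For example, a combined and cached resolver could be set up like this (a sketch; the class names follow the API described here, but treat the details as indicative):

```csharp
// Try local profiles first, then the specification zip, then the web;
// results are cached for subsequent resolutions.
IResourceResolver resolver = new CachedResolver(
    new MultiResolver(
        new DirectorySource(@"c:\myprofiles"),
        ZipSource.CreateValidationSource(),
        new WebResolver()));
```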

This is how you would locate the core profiles for, say, Patient:

IResourceResolver zipResolver = ZipSource.CreateValidationSource();
StructureDefinition pat = zipResolver.ResolveByCanonicalUri(
    "http://hl7.org/fhir/StructureDefinition/Patient") as StructureDefinition;

Of course, you can write your own implementations for these interfaces. We did so for the registry Simplifier, where we needed an IResourceResolver that resolves uris to resources using the database of profiles present in the registry.

These abstractions are used by the validator and the terminology modules to retrieve referenced profiles and valuesets when they are encountered.

Navigating StructureDefinition’s hierarchy

If you have worked with StructureDefinition, you know the pain of dealing with its flat list of elements: even though the StructureDefinition expresses a (deeply) nested tree of elements, these are represented as a flat list (with hierarchical paths). You’ll generally need a lot of smart string manipulation just to find “the next sibling” of an element, or to move back to its parent. To avoid this kind of code creeping all over the API, we developed an internal class that we have now made public: the ElementDefinitionNavigator. It’s highly optimized for speed and low memory use, so you can move around the hierarchy expressed by the StructureDefinition with ease:

StructureDefinition def = zipResolver.FindStructureDefinitionForCoreType(FHIRDefinedType.Patient);
ElementDefinitionNavigator nav = new ElementDefinitionNavigator(def);

nav.MoveToFirstChild();   // move down into the element hierarchy
nav.MoveToNext();         // move to the next sibling element


Creating Profile snapshots

Maybe not a feature everyone will need, but if you are authoring profiles you will at some point need to turn your changes to a profile (“the differential”) into a snapshot that can be used as input for e.g. rendering and validation tools. This is a pretty complex job, and Grahame Grieve, Chris Grenz and our own Forge developer Michel Rutten have been busy making sure our tools can read each other’s outputs. Mind you, that means long nightly sessions on the subtleties of merging differentials into bases, but Michel packed it all into the new SnapshotGenerator. It builds on the resolver and ElementDefinitionNavigator above to do its work, and using it looks deceptively simple:

StructureDefinition def = myResolver.FindStructureDefinition("");
var gen = new SnapshotGenerator(myResolver);

// Regenerate the snapshot for the StructureDefinition
gen.Update(def);
And yes, it will handle your re-sliced re-slices.

Terminology services

Terminology is a vast and complex area, and we have no intention to provide a full-fledged terminology server in the API, but there’s now at least ITerminologyServer and its lightweight, in-memory implementation LocalTerminologyServer. It supports ValueSets with define and compose statements, but it does not support filters. It’s designed to handle the most common uses of ValueSet, but will happily throw a NotSupportedException if you push it too far or if your expansion exceeds a certain maximum number of concepts.

The LocalTerminologyServer depends on two pieces of new functionality:

  • The `ValueSet` class now has methods to search by code/system in an expansion
  • There is a new class ValueSetExpander, which expands a valueset in-memory (with the caveats mentioned above)

Most likely we will still add another implementation called MultiStrategyTerminologyServer that would first use this local server, and when that fails, reach out to a real terminology server and use the FHIR REST Terminology operations to get what it needs.


Validation

That brings us to the most interesting addition for most people: the validator. It’s been a long-standing wish to add this to the .NET API, but thanks to support from the UK’s NHS Digital, we have been able to work on this as well.

If you have been reading about the new features above, you can see these are all pillars that make validation possible:

  • A flexible poco-less way to efficiently read any kind of FHIR instance data (from file, xml, json, memory, database) – the `IElementNavigator`
  • A way to retrieve the StructureDefinitions and other conformance resources to validate the instance against – the `IResourceResolver`
  • A way to generate snapshots for these definitions, which serve as input to the validator
  • The ability to run constraints formulated in FhirPath
  • The ability to validate bindings using the `ITerminologyService`

Mix these together, add some smart validation logic and you get the Validator.
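Wiring these pillars together might look roughly like this (a sketch only; the settings and method names are indicative, not the definitive API):

```csharp
var settings = new ValidationSettings
{
    ResourceResolver = zipResolver,    // where to find StructureDefinitions
    GenerateSnapshot = true,           // build snapshots on the fly when needed
    TerminologyService = termService   // used to validate bindings
};

var validator = new Validator(settings);
OperationOutcome outcome = validator.Validate(patientInstance);
```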

It is not completely done yet (of course we left slice validation till the end), but it does most other constraints, including binding and simple extension validation. It handles Bundles and aggregation modes as well.

Learning more

While writing this blog I realized how much functionality we have added, and of course I had to leave lots of details out of the description above. The best way to get started is by running the validator demo and looking at the code and unit tests. No, indeed, we have no online documentation about all of this yet. Another great way to learn more is to visit the upcoming HL7 FHIR DevDays, where I will have a session on these (advanced) uses of the FHIR .NET API.

For now, let me get back to my Visual Studio and try to get it all polished and finished up for you.

The FHIR is strong in these ones

Most of you who attended the HL7 Atlanta WGM or the HL7 FHIR DevDays in Amsterdam have taken the opportunity to get photographed in a Jedi setting: hold a real lightsaber and look as fierce as you can.

We are now selecting the best pictures and compiling them into the FHIR Calendar 2016, which will be available in January: 6 pictures from the Atlanta WGM and 6 pictures from the DevDays. Soon you will know whether you are part of the (2016) FHIR history!

By the way, if you missed your chance to be on the calendar, we’ll be organizing a new edition of the FHIR Developer Days in Amsterdam on the 16-18th of November 2016!

Here’s a preview of the Furore team, most of them working on FHIR community projects like our Profile editing tool Forge and the Profile registry Simplifier:


The new DSTU2 Operations framework

As you probably know, FHIR lets you extend its datamodel using extensions, thus enabling you to add application and usecase specific data to the “core” datamodels. You can publish the details about these extensions on a FHIR server or repository, so people can find out what your extension means and what kind of data it allows.

In DSTU2 we introduced another extension mechanism: the Operations Framework. This framework allows you to add new operations to the FHIR REST interface, so you can add application-specific functionality that your clients can call. And just as with extensions, you can publish details about these operations on your FHIR server so others can find out how to invoke the operation.

But what exactly is an “operation” in the context of FHIR’s REST interface? In DSTU1 we were not particularly clear about this and we simply had a list of interactions, which included things like “read”, “search” and “validate”. In addition, DSTU1 offered a search parameter called “_query” that took the name of a custom “query”. Interestingly enough, the only “query” defined by the specification was the valueset expansion, which triggered amusing debates about whether this actually qualified as a query. Moreover, we needed to add operations that had side-effects and that took parameters of complex types. None of this was possible with the “_query” extension point. This resulted in the addition of explicitly defined operations.

When you are invoking actions on a FHIR server, you are now either using the basic core “interactions”, which are just the CRUD operations, search and history, or something else: anything else is an operation. The specification comes with certain pre-defined operations and lets you define new ones of your own design.


Invoking an operation

Operations are (mostly) POSTs to a FHIR endpoint, where the name of the operation is prefixed by a “$” sign. For example:

    POST [base]/Patient/1/$everything
Whenever the operation is idempotent it may be invoked using GET as well (as is the case with the example above).

Operations can be invoked on four types of FHIR endpoints:

  • The “base” FHIR service endpoint (e.g. [base]) – these are operations that operate on the full scale of the server. For example: return all extensions known by this server.
  • A resource type (e.g. [base]/Patient) – these operations operate across all instances of the given type.
  • A resource instance (e.g. [base]/Patient/1) – for operations that involve a single instance, like the $everything operation above.
  • A version of a resource instance ([base]/Patient/1/_history/2) – which, as you can guess by now, is used by operations that involve a specific version of a specific instance of FHIR data. These are a bit exceptional in the sense that they should not alter instance data (since that would require creating an update) and were specifically introduced to allow manipulation of profile and tag metadata (which is allowed without creating a new version).

The body of the invocation contains a special infrastructure resource called Parameters (yes, plural). This Resource represents a collection of named parameters as <key,value> pairs, where the value may be any primitive or complex datatype or even a full Resource. In addition you may pass in strings that are formatted as the search parameter types.

On completion, the operation returns another Parameters resource, this time containing one or more (!) output “parameters”. This means that a FHIR operation can take any number of parameters “in” and return a set of result parameters “out”. We chose to introduce this special Parameters resource so that both the body of the POST and the returned result are always a Resource. Had we not done this, we would have needed to expand the FHIR interface and FHIR serialization to allow for bodies that were just a complex datatype or primitive.
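As an illustration, the invocation body of a hypothetical operation (the parameter names here are invented) taking a code and a full resource would look like this:

```xml
<Parameters xmlns="http://hl7.org/fhir">
    <parameter>
        <name value="mode" />
        <valueCode value="full" />
    </parameter>
    <parameter>
        <name value="resource" />
        <resource>
            <Patient>
                <!-- the patient passed to the operation -->
            </Patient>
        </resource>
    </parameter>
</Parameters>
```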

Publishing an operation’s definition

Invoking an operation is just the first part of the story. To make your new operations discoverable, you use the new OperationDefinition resource. Just as with extensions, this is a computable expression of the definition of your operation, containing some metadata and details about the allowed parameters and result values. Since the OperationDefinition is itself a resource, you can publish it on your FHIR server endpoint or in a FHIR repository. Developer tools can take this definition and generate a strongly-typed API interface for it, and (just like our publication tool does) you may use it as a source to generate documentation for your new operation.

The DSTU2 core specification comes with quite a few pre-defined operations, and you will find some of the old DSTU1 interactions that you thought were missing in DSTU2 now showing up as operations:

  • Validation ($validate) – this replaces (and expands on) the validation functionality of the old _validate interaction in DSTU1. You can send in a Resource (or a reference to a resource) and a reference to a StructureDefinition to get your resource validated.
  • Tag operations ($meta, $meta-add, $meta-delete) – replace the tag operations in DSTU1 to add profile, tag and security tag metadata for a resource instance.
  • The Mailbox endpoint ($mailbox) – Delivers a message to a FHIR server.

In addition, the specification now has a full set of ValueSet operations, which turn a FHIR server into a terminology server, offering limited CTS capabilities.

Finally, you can find all operations defined for a Resource on the new “Operations” tab that was added to all Resource pages.

I want to be a student at Heilbronn

Last week, at the end of the FHIR DevDays, we offered participants a podium to demo their prototypes built or tested during the hackathon part of the DevDays. While there were surprisingly many interesting entries, one in particular struck me: a demo by Simone Heckmann, which showed a message broker turning HL7 v2 messages into FHIR and posting them to a server.

Now, Simone teaches at Hochschule Heilbronn, and she told us that building this message broker is actually part of a student project at the Hochschule. That’s right: Simone uses FHIR to teach her students about interoperability and to show them the caveats and real-life problems involved in building connected systems. And that’s only part of her teaching curriculum; in addition to having them map one type of message to another standard, she also asks her students to select any of the available open-source FHIR clients and servers, play with them for about a month and extend them. And this is just the prelude to the final part of the teaching program: she then organizes a hackathon at the Hochschule where the students bring the pet projects they have been working on and test them against each other.

By the time she reached this point in her story, I was almost sure I had misheard. A teaching facility where students not only learn the background of actual interchange standards and terminologies, but where current standards and software are used to get hands-on experience with them and be confronted with the reality of having your software speak to someone else’s! Where was she when I was a student? Okay, “interchange” back then meant calling servers using CORBA and writing IDL, but still! I know quite a bit about current teaching methods and I frequently talk to people coming straight out of (medical) IT universities, but this is the first time I have heard of a teaching program that takes practical teaching to this level.

As you can understand from my perspective as a FHIR co-designer, I love the idea of combining teaching with bringing practical skills to future interface developers and I am delighted that one of the central tenets of FHIR – focus on implementability and practical use – has found such good use in teaching about interoperability. In fact, I love the idea so much that I’d like to see how we can bring some of the student projects to the Developer Days next year.

See this as a warning to the other participants of next year’s edition of the DevDays: you will have to face the projects from these fresh minds from Heilbronn, showing what you can do with FHIR in just a single semester of teaching. Good luck!

Oh, how I would have loved to be a student at Heilbronn!

Update: it turns out I would have been a co-student of Grahame’s; see Grahame’s blog

Profiles in FHIR

As I said last week, most of my work currently centers around implementing FHIR’s Profile functionality. It’s a very important bit of the FHIR standard, but it is -as yet- invisible and hard to grasp for many. As well, I know from my tutorials that many who are new to FHIR do not know what purpose Profiles serve, and where they fit in the standard. I’d like to shed some light on that today.

First off, I want to show you the little “proverb tile” that both Grahame and we at Furore have on our walls:

Ontology / Mapping : Thorough, Deep, Authoritative
Exchange : Simple, Concise, Modular
Conformance : Capable, Comprehensive, Fine-grained

These are the three basic pillars of FHIR. It’s worth discussing them one by one to see how and where Profiles fit in.

Exchange

The best known at the moment is the “Exchange” bit. It specifies “how we exchange data”, so this part of the spec is about:

  • What does the data look like: these are the XML and JSON formats
  • How do we exchange data: the HTTP-based REST and Search specification.
  • How is data composed into useful exchanges: Bundles, Documents and Messages.
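
The first two bullets can be made concrete with a small sketch of FHIR’s RESTful “read” interaction: a resource lives at [base]/[type]/[id], and the Accept header selects the XML or JSON representation. The base URL below is a placeholder, not a real endpoint, and the MIME types are the ones the spec used at the time of writing.

```python
# Sketch of building a FHIR "read" request: GET [base]/[type]/[id].
# No network call is made here; we only construct the URL and headers.

def read_request(base, resource_type, resource_id, fmt="json"):
    """Build the URL and headers for a FHIR read interaction."""
    mime = {"json": "application/json+fhir", "xml": "application/xml+fhir"}[fmt]
    url = f"{base.rstrip('/')}/{resource_type}/{resource_id}"
    return url, {"Accept": mime}

url, headers = read_request("http://example.org/fhir", "Patient", "123")
print(url)  # http://example.org/fhir/Patient/123
```

Pointing any HTTP client at such a URL against a test server is all it takes to retrieve your first resource, which is a large part of why this pillar was designed to be simple.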

The Exchange bit of FHIR is one of the most well-known and oldest (relatively speaking, of course) bits of the specification and the most stable as well. We’ve tested it during seven connectathons and we’re seeing quite a bit of support for it in API libraries and new products. It’s not completely ready yet, as we are working on new interaction patterns, like notifications and more powerful querying.

As you can see on the tile, we wanted this part to be “simple”: everyone who implements a FHIR exchange will need to be familiar with this part of the spec -from health IT veterans to freshly schooled app developers- so it had better be simple.

Ontology

While the Exchange tells you how to exchange data, the Ontology part tells you what you can exchange. At its simplest, these are the core data models we provide with the specification, better known as FHIR’s Resources. But it does not stop there. FHIR had to find a solution for a simple fact of life for standards: no matter how many data elements you add, you will never cover everyone’s needs. Conversely, no matter how concise you try to be, there will be too much complexity for some.

So FHIR adopted the Profile: a computer-readable statement that allows you to simplify or augment parts of the standard datamodels. In practice: as soon as you start using FHIR in your organization, country or any other context of exchange, you will produce a Profile for that context in which you:

  1. specify which parts of the core models are of no use to you and may not be used. Remember that removing flexibility you don’t use makes it easier and less expensive for your exchange partners to implement the standard.
  2. add data elements to models that are not part of the base specification but that you need and are specific to your use case. Adding your own elements for use with your exchange partners is considered a “normal” thing to do in FHIR, even if this means the data can only be interpreted by you and your business partners.
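
To illustrate the first point, here is a toy validator (deliberately not the real StructureDefinition format) showing how a profile narrows a core model by tightening element cardinalities. The element paths and the “German patient” rules are hypothetical examples, not an actual published profile.

```python
# Toy illustration of profile-style cardinality constraints: each rule maps an
# element path to (min, max) occurrence counts; max=None means unbounded.

def validate(instance, rules):
    """Check element counts in a dict-based instance against profile rules."""
    problems = []
    for path, (min_card, max_card) in rules.items():
        count = len(instance.get(path, []))
        if count < min_card:
            problems.append(f"{path}: expected at least {min_card}, found {count}")
        if max_card is not None and count > max_card:
            problems.append(f"{path}: expected at most {max_card}, found {count}")
    return problems

# Hypothetical rules: this profile requires an address and forbids
# Patient.animal (removing flexibility the exchange partners don't use).
german_patient_rules = {
    "Patient.address": (1, None),
    "Patient.animal":  (0, 0),
}

instance = {"Patient.address": ["Musterstraße 1, Berlin"]}
print(validate(instance, german_patient_rules))  # [] — the instance conforms
```

A real profile carries much more (fixed values, bindings, slicing, documentation), but the core idea is the same: a machine-readable statement that an instance either does or does not satisfy.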

These Profiles, combined with narrative that explains the business context of your exchanges, information about your organization’s or country’s architecture and service endpoints, security, etcetera, will form what is otherwise known as an implementation guide for FHIR.

As a developer, you could think of these Profiles as a mechanism comparable to an XSD schema. It can be used to validate incoming messages and see whether they do in fact conform to the agreements you made with your exchange partners. But its functionality is broader: it serves to document the way the standard is used and it has additional formalisms that XSD cannot support. For this reason, we can derive XSD schemas and schematrons from Profiles, but schemas and schematrons alone don’t cover all the ground we need in the Ontology department of FHIR.

There’s no denying that Profile is a complex beast (this is the “deep” part of the proverb tile), and if you look at its documentation you’ll see it is a big data structure with a lot of documentation. Even so, the documentation we currently have is incomplete and probably partially impenetrable without having one of us sitting next to your desk. I’ll try to fix some of that in future posts!

The Profile is not the only inhabitant of the Ontology; other notable inhabitants are ValueSet and ConceptMap. They will be joined in DSTU2 by Namespace and DataElement.

Much work still needs to be done here. We are busy writing a modeller-friendly authoring tool for FHIR (called Forge) and both Grahame and I are writing support libraries for Java and .NET to do the heavy lifting of working with Profiles (validation, using their (meta)data in your server, graphical display). I am sure this will be an enjoyable summer, and we are planning to test the fruits of our work in the upcoming September 2014 FHIR connectathon in Chicago.

Conformance

Finally, conformance. This is a long blogpost already, so I will not spend too much time on it, but Conformance will complete the trio by allowing servers to specify which parts of the FHIR specification (and your profiles) they know about and support. As well, FHIR clients can use Conformance to dictate which parts of FHIR and which profiles they need a server to support for them to be able to work with the server. Since Conformance is a computable, computer-readable specification (in fact it is a Resource, just like Profile), you could write functionality to:

  • Compare a server’s Conformance with a client’s Conformance statement and determine whether they can cooperate
  • Read a server’s Conformance and do certification testing based on what the server promises it can do
  • Publish a Conformance statement for certain common sub-sets of FHIR functionality (e.g. “FHIR light”, “FHIR for the US”, “FHIR for organization X”).
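
The first bullet can be sketched in a few lines. Real Conformance resources are much richer, but for illustration each statement is reduced here to a set of (resource, interaction) pairs, and cooperation means the server covers everything the client needs. All names below are made up for the example.

```python
# Sketch of comparing conformance statements: the client can work with the
# server exactly when the set of interactions it needs is a subset of what
# the server supports. Set difference yields the missing capabilities.

def missing_capabilities(client_needs, server_supports):
    """Return the interactions the client needs but the server lacks."""
    return client_needs - server_supports

server = {("Patient", "read"), ("Patient", "search"), ("Observation", "read")}
client = {("Patient", "read"), ("Patient", "update")}

print(missing_capabilities(client, server))
# {('Patient', 'update')} — so these two cannot fully cooperate
```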

Conformance will receive much more attention when FHIR gets more mature, especially in the area of automated conformance testing.


FHIR is a big building, and so far we have mainly discussed the “Exchange” bit. The core team is now focusing its energy on the “Ontology” part and you will see extensions to the reference implementations to help you work with it. As well, we are well aware the current documentation is hard to follow unless you were there when we wrote it (or, as Bjarne Stroustrup once put it, it “does not try to insult your intelligence”). We’re working on it, and I’ll spend my next posts discussing parts of Profile that deserve some quality time!

Metrics in FHIR, part 1

(This is a guest post by my colleague Martijn, who is implementing FHIR Profile validation for the .NET reference implementation)

As reference implementers of FHIR, we try to fully understand the possible difficulties that future FHIR implementers would encounter. When we can, we try to build the full implementation, including the optional parts. We went the full monty with the implementation of storing and searching for quantities, which was not an easy but a very interesting path, and I will try to tell you all about it in this and the next two blog posts.

To get you up to speed, let’s start with a small recap: Quantity is one of the core data types in FHIR. It is used to communicate and store physical measurements. Doctors do it. Nurses do it. And especially laboratories do. A quantity basically consists of a value and the unit in which the value is expressed. For example [3 liter]. So far so good. Doesn’t look too difficult. The truth is both the value and the unit have their share of serious challenges.

A unit is not simply a unit, but actually a compound of prefixed units combined through multiplication and division. For example pressure: kilopascal per square meter (kPa/m^2). It gets more complicated: in the US this measurement would be expressed in pounds of force per square inch. The measurement is the same, but the value is completely different. Yet in the end we do want to compare them.
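
To see what comparing across unit systems looks like, here is a minimal sketch. The conversion factor (1 psi ≈ 6.894757 kPa) is the standard one; a real implementation derives such factors from each unit’s decomposition into base units (as UCUM-based libraries do) instead of hard-coding them per unit pair.

```python
# Sketch of comparing the same pressure expressed in two unit systems.
# Hard-coded factor for illustration only; real code derives it from the
# compound unit's structure.

PSI_TO_KPA = 6.894757

def psi_to_kpa(value_psi):
    """Convert a pressure in pounds per square inch to kilopascal."""
    return value_psi * PSI_TO_KPA

# 14.7 psi (roughly one atmosphere) and 101.3 kPa describe the same pressure:
print(abs(psi_to_kpa(14.7) - 101.3) < 0.1)  # True
```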

As well, a conversion of 1 kilogram to 1,000 grams is not a mere multiplication by 1,000. Because a measurement has precision, 1 kilogram is not the same as 1,000 grams: it is the same as 1e3 grams (1 times 10 to the power of 3), and the value 1,000 is a thousand times more precise than 1e3.
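
Python’s standard decimal type happens to illustrate this precision point nicely: it tracks significant digits, so a conversion can shift the exponent instead of fabricating extra digits of precision (the kilogram example here is mine, not from the .NET library).

```python
from decimal import Decimal

# Converting 1 kg to grams without inventing precision: scaleb shifts the
# decimal exponent by 3, yielding 1E+3 rather than the four-digit 1000.

kilograms = Decimal("1")        # one significant digit
grams = kilograms.scaleb(3)     # multiply by 10**3 via the exponent
print(grams)                    # 1E+3
print(grams == Decimal("1000")) # True: numerically equal...
print(str(grams))               # "1E+3": ...but the notation keeps the precision
```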

In the next two separate posts I will tell you how we dealt with the unit and the value issues. If you can’t wait to check it out, take a look at the Metric library, right here