Category Archives: .NET


Forge 14.4 technical overview


Last Friday (November 25th) we published a new Forge release, Forge 14.4 for DSTU2 – FHIR DevDays 2016 edition. You can download the new Forge release for free. The previous Forge release (13.2) dates from June 10th, so clearly it’s been a while since we published an update. In this blog post I’ll elaborate on the new features.

Profiles on profiles

Profiles created in Forge 13.2 and earlier always express a set of constraints on a FHIR core datatype or resource. However the FHIR specification also allows you to author so-called profiles-on-profiles or derived profiles. A derived profile represents a set of constraints on another, existing (non-core) base profile. This allows you to define a hierarchy of profiles that are based on each other.

In a typical situation, a national standards organization publishes a set of official national FHIR profiles defining some common country-wide constraints. Organizations within this country create and publish their own profiles that are derived from the official national profiles. For example, the Dutch HL7 affiliate publishes a national Dutch Patient profile with a specific constraint on Patient.identifier for the Dutch social security number (BSN). Dutch organizations can now author and publish their own, more specialized profiles with additional organizational constraints based on (derived from) the national profiles. Because different organizational profiles all conform to a common set of national profiles, they provide a certain well-defined level of interoperability within the country.
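In StructureDefinition terms, a derived profile simply points its base element at another profile instead of at the core definition. An illustrative DSTU2 fragment (the canonical URLs are hypothetical):

```xml
<!-- Hypothetical organizational profile derived from a national profile -->
<StructureDefinition xmlns="http://hl7.org/fhir">
  <url value="http://example-hospital.nl/fhir/StructureDefinition/MyPatient"/>
  <constrainedType value="Patient"/>
  <!-- base points at the national profile instead of the core Patient definition -->
  <base value="http://example.org/fhir/StructureDefinition/national-patient"/>
  <differential>
    <!-- additional organizational constraints go here -->
  </differential>
</StructureDefinition>
```

A validator can then verify conformance against the whole chain: organizational profile, national profile, core Patient.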


Hierarchy of Profiles

The rules for derived profiles are similar to the rules for profiles on core datatypes or resources. A profile is valid if all resources that conform to the profile also conform to the associated base profile. This implies that a profile is only allowed to constrain the base profile; it is not allowed to relax or remove constraints defined by the base profile.

Note: In a sense, we can interpret a profile as a filter or predicate on the resource space, cf. a WHERE clause in a SQL expression. The number of constraints that a profile expresses is inversely proportional to the total number of conforming resources. As derived profiles become more and more specific, the total set of conforming resources becomes smaller and smaller. The extreme case is a profile that matches no resources at all; such a profile may very well be valid according to the FHIR spec, albeit meaningless. Similar to the SQL predicate WHERE 1=0, which is perfectly valid but will always evaluate to an empty set.


In Forge release 14.4, you can now create and edit profiles based on other (non-core) profiles. The command New Derived Profile creates a new derived profile based on an existing base profile on disk. The initial implementation is still somewhat limited and supports fairly simple base profiles (e.g. without slicing). Future updates will add support for more advanced scenarios such as reslicing.

Resolving resources

Profiles created in previous Forge releases, up to and including version 13.2, were always based on a core datatype or resource definition. The Forge application directory contains a ZIP archive with the official definitions of all FHIR core datatypes and resources. Upon application startup, Forge scans the ZIP archive and loads all the available definitions. When creating a new profile, Forge extracts the selected core base definition from the ZIP file and initializes a new profile from the selected base.

In the new 14.4 release we’ve added support to create and edit profiles based on other existing profiles. Forge now needs to resolve the custom base profile from a custom location, as it can no longer be resolved from the default ZIP archive that contains only the core definitions.

A profile can also express constraints on element types via external profiles. For example, a Patient profile may introduce a custom Identifier profile on the Patient.identifier element. In this case, Forge also needs to resolve the specified external profile in order to dynamically generate user interface components for profile child elements and verify the specified constraints.
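On the StructureDefinition level, such a constraint lives in the element's type. An illustrative DSTU2 fragment (the profile URL is hypothetical):

```xml
<!-- Hypothetical fragment of a Patient profile: Patient.identifier must
     conform to a custom Identifier profile, which Forge must resolve. -->
<element>
  <path value="Patient.identifier"/>
  <type>
    <code value="Identifier"/>
    <profile value="http://example.org/fhir/StructureDefinition/MyIdentifier"/>
  </type>
</element>
```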

The same holds for extension elements in a profile. Each extension element is mapped to an external extension definition. Forge needs to resolve the specified extension definitions in order to generate UI and verify constraints.

Forge should be able to resolve external profiles from the containing (working) folder on disk, from a separate external directory, from a FHIR server or from the FHIR registry. To implement this, Forge requires a generic and flexible system to resolve external (user-defined) profiles on the fly, separate from the standard core datatype and resource definitions.

Fortunately (and not coincidentally…) the FHIR .NET API library provides a flexible and efficient resolving mechanism based on the generic IResourceResolver interface, as explained by Ewout in his recent article Validation – and other new features in the .NET API stack. The library exposes default implementations that support resolving resources from disk and online. The library also exposes convenient decorators for caching and resolving from multiple sources based on priority. Forge 14.4 now leverages the FHIR .NET API to implement a flexible multi-step mechanism for resolving external (conformance) resources.
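Conceptually, the multi-step resolving chain is a stack of decorators: a prioritized multi-source resolver wrapped in a cache. The sketch below illustrates the idea with simplified stand-in types, not the actual Hl7.Fhir classes:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the API's resolver abstraction (not the real
// Hl7.Fhir types): a resolver maps a canonical URI to a resource.
public interface IResolver
{
    string Resolve(string uri);   // returns null when not found
}

// Tries each source in priority order, like the API's multi-source decorator.
public class MultiResolver : IResolver
{
    private readonly IResolver[] _sources;
    public MultiResolver(params IResolver[] sources) { _sources = sources; }

    public string Resolve(string uri)
    {
        foreach (var source in _sources)
        {
            var result = source.Resolve(uri);
            if (result != null) return result;
        }
        return null;
    }
}

// Caches resolutions so repeated lookups don't hit the underlying sources.
public class CachedResolver : IResolver
{
    private readonly IResolver _inner;
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
    public CachedResolver(IResolver inner) { _inner = inner; }

    public string Resolve(string uri)
    {
        string hit;
        if (_cache.TryGetValue(uri, out hit)) return hit;
        var result = _inner.Resolve(uri);
        _cache[uri] = result;
        return result;
    }
}

// Stand-in for a concrete source, e.g. a working folder or the core ZIP.
public class DictionaryResolver : IResolver
{
    private readonly Dictionary<string, string> _map;
    public DictionaryResolver(Dictionary<string, string> map) { _map = map; }
    public string Resolve(string uri)
    {
        string value;
        return _map.TryGetValue(uri, out value) ? value : null;
    }
}
```

A caller would then compose something like `new CachedResolver(new MultiResolver(workingFolder, coreDefinitions))`: the working folder wins, the core definitions act as fallback, and the cache sits on top.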


Resource Resolvers

Upon opening an existing profile in Forge 14.4, the application creates an associated resolver that is bound to the containing folder on disk (including subfolders). The resolver provides cached access to all external profiles in the same folder structure. Future updates will introduce support for the configuration of additional sources, e.g. on local disk or online.

Note: for now, please keep related profiles together in a common folder, and Forge will find them. Be aware that the resource resolvers cannot handle duplicate profiles, i.e. multiple profiles in the same folder structure that share a common canonical URI. When Forge detects duplicate profiles, it displays an error message and lists the conflicting files. In that case, manually delete or move the obsolete duplicates from the folder and retry.

The current Session Explorer in Forge provides a conceptual/logical view of resources and resource containers. However the new resource resolving approach introduces a new folder-based workflow that is fundamentally different from the earlier approach and does not fit well with the current Session Explorer. Therefore the Session Explorer will soon be phased out in favor of a new folder-based approach (cf. Visual Studio Solution Explorer).

Snapshot generation

One of the largest changes in the Forge 14.4 release is a completely new implementation of the logic for generating the snapshot and differential components. Let’s take a closer look at these concepts.


Snapshot Spectacles

A FHIR profile defines a structured set of elements. Each element definition has a set of properties. The combination of all the element definitions and associated properties uniquely defines a specific datatype or resource structure. This holds for both a core datatype or resource definition, as well as for a (derived) profile. FHIR defines this complete representation as the so-called “snapshot” component. The official name might be a bit misleading in that it suggests a captured state at a specific moment in time; while this is not incorrect, the key aspect of the snapshot component is that it intends to provide a complete and full representation of a certain data structure. An application that receives a profile snapshot has all the available information and does not need to resolve any external profiles.

A snapshot component can be quite large (~ thousands of lines of XML/JSON). And for a derived profile, the snapshot component contains a lot of redundant information: complete definitions of all the unconstrained elements and properties that are inherited from the base profile. Provided an application has access to the base resource, then it is possible to (re-)generate all the inherited redundant information. So we can also represent a profile by only describing the actual profile constraints and omitting all the information that is implicitly inherited from the base profile. Such a sparse representation of profile constraints is defined by FHIR as the “differential” component. The differential component only expresses the actual constraints with respect to the associated base profile. Generally, the differential component is (much) smaller than the snapshot component. Also the differential provides a convenient serialized representation for profile authors to browse and inspect, contrary to the snapshot component that contains a lot of redundant noise which makes it difficult to navigate.

The FHIR specification requires that a profile should contain either a snapshot component or a differential component or both (invariant sdf-6). A serialized profile with only a differential component has a smaller size and is therefore cheaper to persist and transmit. However this puts the burden on the receiving side to (re-)generate the snapshot component from the associated base profile. A serialized profile that contains a snapshot component is much larger but also self-sufficient; it contains all the available information and the receiver does not need access to any external profiles.

Forge releases up until version 13.2 contained custom application logic to generate the snapshot component from the differential. The old logic was fairly simple but useful as long as Forge was limited to profiles on core datatypes and resources. However in order to implement support for profiles on profiles (a.k.a. derived profiles), the existing snapshot generation logic was no longer sufficiently capable and needed to become a lot smarter. As we now had to completely re-implement snapshot generation anyway, we decided to move the logic out of Forge and into the open source FHIR .NET API library.

The implementation of full-fledged snapshot generation proved to be quite an undertaking. During the process we discovered that some low-level aspects were not yet clearly and/or fully defined by the FHIR specification. So we cooperated with the FHIR core team and the profiling community to fill in the blanks and complete the missing parts of the specification. Especially Chris Grenz‘s contributions proved to be invaluable – Kudos! Working closely together with Chris, who works at Mayo Clinic, we managed to define how to unambiguously express some remaining advanced scenarios, and subsequently to align both our implementations.

Unfortunately the snapshot component is not completely deterministic, e.g. element id’s may vary, list order may vary etc. So FHIR cannot and does not define a “canonical” snapshot format that would allow us to perform a byte-wise or node-by-node comparison of two snapshot versions. Therefore a snapshot comparison algorithm needs some custom logic to handle special FHIR rules.

Although snapshot generation certainly isn’t the sexiest topic within the FHIR spec, it is a crucial aspect for any system dealing with profiles (including validation). And feature-complete snapshot generation proved to be quite challenging to implement. For example, in order to generate snapshots for core type definitions, the algorithm must be able to detect and handle recursive type relations, e.g. Extension.extension. Also, the correct implementation of different scenarios involving (re-)slicing proved to be non-trivial. So we’re quite pleased that this fundamental part of the FHIR specification is now clearly defined and completely implemented (barring some unknown unknowns…). If you’re interested in the implementation (you are definitely a FHIR nerd like us and) you can inspect the Hl7.Fhir.Specification component source on GitHub. Fortunately usage is very simple:

IResourceResolver resolver = ...
StructureDefinition structureDef = ...
var generator = new SnapshotGenerator(resolver, settings);
generator.Update(structureDef);   // (re-)generates structureDef.Snapshot

The snapshot generator uses the specified IResourceResolver instance to resolve references to external profiles and/or extension definitions. If this argument equals null, then the generator automatically falls back to the set of FHIR core datatype and resource definitions. The optional settings parameter allows the caller to specify some custom configuration options for the generator. Simplifier is not yet capable of automatically (re-)generating the snapshot component. Currently, when you publish a profile, you have to explicitly submit the snapshot component in order for Simplifier to render the complete structure. Note that if you publish a profile to Simplifier from within Forge, the application automatically handles this. Once the new release of the FHIR .NET API is ready and stable, we will update Simplifier and integrate the new snapshot generator. From then on, Simplifier will be able to accept profiles with only a differential component and automatically (re-)generate the snapshot. Our team is also working on implementing dynamic differential rendering in Simplifier, so the user can activate a filtered view of only the differential profile constraints.

Differential generation

We’ve seen how snapshot generation transforms a sparse tree of constraints into a complete representation of the target structure. This functionality is now provided by the FHIR .NET API library, in the form of an atomic operation (cf. pipeline). Of course, there also exists an inverse transformation that generates a sparse differential component from a complete snapshot representation of a structure. The differential representation only contains nodes that are constrained with respect to the base resource. Unconstrained nodes are excluded from the differential.


Differential Structure

Forge generates the differential representation on the fly and keeps it in sync with the profile while it is being edited. This approach is completely different from the snapshot generation. Each element definition and property in a user profile is associated with the corresponding node in the underlying base profile. This allows Forge to determine which profile nodes are actually constrained (not empty and not equal to base). Not surprisingly, the process of finding the corresponding nodes in the base profile turns out to be quite involved, especially for slicing.

Originally, up until version 13.2, Forge contained custom logic to resolve corresponding nodes in the base profile. The original logic was limited to handling only fairly simple profiles, especially core datatype and resource definitions, and was not very efficient in its memory usage. As the application developed further, we slowly stretched the limits of the original algorithm. Users started reporting unpredictable behavior (i.e. the differential was not always properly synchronized) that was hard or impossible to fix. Eventually this was no longer maintainable, so we decided to completely rewrite the differential generation logic as well.

Fortunately, the new snapshot generator logic in the FHIR .NET API library already needs to resolve corresponding nodes in the base profile anyway (because each differential constraint is merged onto the corresponding base node). So Forge no longer has to perform that same complex exercise itself. Instead, Forge now receives all the base profile associations as a useful by-product of the snapshot generator. This is quite efficient and increases the loading speed of a profile. Also, we no longer need to maintain and harmonize separate but similar implementations in Forge and the .NET API library, as all common and reusable logic has now been moved into the API.

Since each node in the loaded profile is now associated with the corresponding node in the base profile, it is trivial to determine which nodes are actually constrained with respect to the base resource. Only constrained nodes are actually included in the differential. Unconstrained element properties (either empty or equal to base) are set to null in the underlying POCO. Also, complex nodes and element definitions whose child properties are all equal to null are cleared in the POCO, and their state is automatically re-evaluated whenever a child node changes. This way, all unconstrained nodes are effectively excluded from the differential component. The whole process is quite efficient, as after a change only the affected nodes are re-evaluated.
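The pruning idea can be sketched as follows, using simplified stand-in types rather than Forge's actual POCO model: a node survives into the differential only if its own value differs from the base, or one of its children does.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Conceptual sketch (not the actual Forge/POCO types): a profile is a tree of
// nodes; each node can be matched to the corresponding node in the base.
public class Node
{
    public string Name;
    public string Value;                       // null = unconstrained
    public List<Node> Children = new List<Node>();
}

public static class DifferentialGenerator
{
    // Returns a sparse copy containing only constrained nodes, or null when
    // the node and all of its children are equal to the base.
    public static Node Prune(Node node, Node baseNode)
    {
        var prunedChildren = node.Children
            .Select(c => Prune(c, baseNode?.Children.FirstOrDefault(b => b.Name == c.Name)))
            .Where(c => c != null)
            .ToList();

        bool constrained = node.Value != null && node.Value != baseNode?.Value;
        if (!constrained && prunedChildren.Count == 0)
            return null;   // unconstrained: exclude from the differential

        return new Node
        {
            Name = node.Name,
            Value = constrained ? node.Value : null,
            Children = prunedChildren
        };
    }
}
```

The real implementation of course deals with typed element definitions, slicing and re-entrancy, but the shape of the computation is this bottom-up comparison against the base.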


Generate Differential

The serialized XML representation is (re-)generated in the background and rendered whenever the user activates the XML tab. This ensures that the displayed XML is always up to date without impeding the application performance. The Forge UI only displays the differential, by design, in order to save memory. By default, profiles are also saved to disk with only the differential component. The Options menu provides a configuration setting to toggle the serialization of snapshot components when saving profiles to disk. Needless to say, profiles with snapshots are significantly larger, so in Forge the snapshot option is disabled by default.


This article is way too long… hats off if you managed to read and process all this information. And I still feel like I just scratched the surface. Time permitting, maybe I’ll author another article about the specifics of the snapshot generator implementation, for those few FHIR die hards that care (the usual suspects).


Sparse Tree

Looking back to 2016, the Furore FHIR team has invested a lot of time and effort in (re-)designing and (re-)implementing the fundamental infrastructural components of a FHIR ecosystem, thereby laying the groundwork for a plethora of useful productivity-enhancing features that we can now start building and enhancing in 2017. I must confess that I really enjoyed working on some of the hard-core abstract logic. But the nerdy stuff can also be quite demanding and time consuming, and eventually comes at the expense of basic social behavior and interpersonal relations… So I’ve learned a long time ago that, even though I take joy in higher abstract thinking, I shouldn’t bury myself in it for all too long, as there is a personal price to pay. Now the Forge user interface desperately needs some love – as well as my estranged girlfriend, who’s totally fed up with me rambling on and on about snapshots and such. So I am eagerly looking forward to next year, when I can focus again on improving usability, workflow and my personal love life. But first let’s celebrate the holidays, decorate a tree, spend some quality time with family and friends and hopefully regain some of those long lost social skills.

Happy profiling!

Michel Rutten

Validation – and other new features in the .NET API stack

During the long and rainy summer here in Amsterdam, our team has been working hard to expand the .NET API to support more advanced use cases like full profile validation. Though validation may be The Most Wanted Feature for many of you, it is certainly not the only exciting new feature we’d like to showcase, so I’ll give you a brief overview of what’s coming up in the newest version of the .NET API in the sections below.

Poco-less parsing

We all love the (generated) .NET classes like Patient and Observation for manipulating and creating FHIR data, including the parsers and serializers that turn them into XML and JSON. In fact, this is what the .NET API started out with some years ago. There is a growing class of tools, however, that only need a subset of information, and for which the all-or-nothing nature of working with POCOs is a hindrance. Some of the reasons you may want to start working with POCO-less parsing are:

  • Performance: Our generic FHIR REST server Spark does not really need to parse data into POCOs just to index and store it in its internal format.
  • Flexibility: Tools may want to work with partially incorrect or incomplete resources, which would never be parseable into the generated classes. Our profile editor Forge, for example, would gladly try to read StructureDefinitions that are incomplete or incorrect, so the author can correct and save them using that tool.
  • Version independence: You may want to work with FHIR data from multiple FHIR versions, and write code that deals with the differences; POCO-less parsing allows you to do that, whereas with the POCO classes you would commit yourself to a specific FHIR version. This will be used by the registry tool Simplifier to support profiles in both DSTU2 and STU3.

The central abstraction for working with FHIR data without using POCOs is IElementNavigator, which comes with a set of extension methods to do LINQ-to-Xml-like navigation on the data.
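To give an impression, here is a minimal stand-in for such a navigator abstraction with one LINQ-style extension method (a simplified illustration, not the actual IElementNavigator signatures):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the navigator abstraction, not the actual
// IElementNavigator interface from the API.
public interface IElementNode
{
    string Name { get; }
    object Value { get; }
    IEnumerable<IElementNode> Children();
}

public static class NavigatorExtensions
{
    // LINQ-to-Xml-like navigation: select child elements by name.
    public static IEnumerable<IElementNode> Children(this IElementNode node, string name)
        => node.Children().Where(c => c.Name == name);
}

// Trivial in-memory implementation, standing in for the POCO/XML/JSON ones.
public class DictNode : IElementNode
{
    public string Name { get; set; }
    public object Value { get; set; }
    public List<DictNode> Nodes { get; set; } = new List<DictNode>();
    public IEnumerable<IElementNode> Children() => Nodes;
}
```

Any data source that can answer "what is your name, value and children" can implement such an interface, which is exactly why it works for XML, JSON, POCOs and proprietary models alike.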


Many of the newer parts of the .NET API are built on top of this interface, and since implementing IElementNavigator is pretty straightforward, you could implement this interface on top of your own (proprietary) model and have it participate in the functionality the .NET FHIR stack offers, like validation. Out of the box, we provide an implementation for POCOs, XML/Json and probably RDF.

FhirPath evaluation

In the meantime, work has been underway on an HL7-designed navigation and extraction language called FhirPath (formerly known as FluentPath). FhirPath looks a lot like XPath and can be used to walk trees, extract subtrees and formulate invariants, just like XPath. In fact, almost all XPath invariants in the spec have now been replaced by their FhirPath equivalents.

In parallel, we have been adding a FhirPath compiler and execution environment to the API. It’s built on top of the IElementNavigator described above, so you could now say:

IElementNavigator instance = XmlDomFhirNavigator.Create("...");
var names = instance.Select("Patient.name.where(given = 'Ewout')");

Since we’ve also implemented IElementNavigator on top of the POCOs, you can now quickly select any part of an in-memory instance or run FhirPath invariants on top of POCOs.

Underneath, a FhirPath compiler will turn these FhirPath statements into ordinary .NET lambdas, so if you execute the same statement multiple times, you’ll be running native lambdas, not re-interpreted strings.
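The compile-once-then-cache behaviour can be illustrated with a toy predicate language (purely a sketch; the real compiler translates FhirPath expressions into .NET expression trees, not this mini-syntax):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of compile-once evaluation: expressions are "compiled"
// to delegates and cached, so repeated execution reuses the native lambda
// instead of re-interpreting the string.
public class ExpressionCache
{
    private readonly Dictionary<string, Func<int, bool>> _compiled =
        new Dictionary<string, Func<int, bool>>();

    public int Compilations { get; private set; }

    // Stand-in "compiler" for a tiny predicate language: "value > N".
    public Func<int, bool> GetCompiled(string expression)
    {
        Func<int, bool> fn;
        if (_compiled.TryGetValue(expression, out fn)) return fn;

        Compilations++;
        int threshold = int.Parse(expression.Substring("value > ".Length));
        fn = value => value > threshold;   // the cached lambda
        _compiled[expression] = fn;
        return fn;
    }
}
```

Running the same expression twice compiles it once; the second call just invokes the stored delegate, which is the point of the design.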

Retrieving conformance resources

Most of us will sooner or later need the metadata for resources, called the conformance resources in FHIR. These allow you to get information about which elements are members of a given resource type, which datatypes exist, the cardinality of elements, etcetera. The .NET API has a new abstraction for this (based on previous work that’s been around for a while): the IResourceResolver.

Its main method is ResolveByCanonicalUri(), which will get you the conformance resource (StructureDefinition, ValueSet, OperationDefinition etc.) with a given URI. All core datatypes have canonical URIs (like http://hl7.org/fhir/StructureDefinition/Patient), but these could of course also be more specific profiles, like Argonaut’s ‘DAF Patient’.

We have provided implementations for locating these resources within a ZIP file (like the one available on the FHIR website) and within the directory where your application is running. Resolvers can be cached and combined, so you can have a resolver that first tries your local directory, then the standard ZIP, then goes out to the web.

This is how you would locate the core profiles for, say, Patient:

IResourceResolver zipResolver = ZipSource.CreateValidationSource();
StructureDefinition pat = zipResolver.ResolveByCanonicalUri("http://hl7.org/fhir/StructureDefinition/Patient")
      as StructureDefinition;

Of course, you can write your own implementations for these interfaces. We did so for the registry Simplifier, where we needed an IResourceResolver that resolves uris to resources using the database of profiles present in the registry.

These abstractions are used by the validator and the terminology modules to retrieve referenced profiles and valuesets when they are encountered.

Navigating StructureDefinition’s hierarchy

If you have worked with StructureDefinition, you know the pain of dealing with its flat list of Elements: even though the StructureDefinition expresses a (deeply) nested tree of elements, these are represented as a flat list (with hierarchical paths). You’ll generally need a lot of smart string manipulation to just find “the next sibling” for an element, or move back to its parent. To avoid this kind of code creeping all over the place in the API, we developed an internal class that we have now made public, the ElementDefinitionNavigator. It’s highly optimized for speed and low-memory use so you can now move around the hierarchy expressed by the StructureDefinition with ease:

StructureDefinition def = zipResolver.FindStructureDefinitionForCoreType(FHIRDefinedType.Patient);
ElementDefinitionNavigator nav = new ElementDefinitionNavigator(def);
nav.MoveToFirstChild();   // e.g. navigate into the first child element of Patient


Creating Profile snapshots

Maybe not a feature everyone will need, but if you are authoring profiles you will at some point need to turn your changes to a profile (“the differential”) into a snapshot that can be used as input for e.g. rendering and validation tools. This is a pretty complex job, and Grahame Grieve, Chris Grenz and our own Forge developer Michel Rutten have been busy making sure our tools can read each other’s output. Mind you, that means long nightly sessions on the subtleties of merging differentials into bases – but Michel packed it all into the new SnapshotGenerator. It builds on the resolver and ElementDefinitionNavigator above to do its work, and using it looks deceptively simple:

StructureDefinition def = myResolver.FindStructureDefinition("");
var gen = new SnapshotGenerator(myResolver);

// Regenerate the snapshot for the StructureDefinition
gen.Update(def);
And yes, it will handle your re-sliced re-slices.

Terminology services

Terminology is a vast and complex area, and we have no intention to provide a full-fledged terminology server in the API, but there’s now at least ITerminologyServer and its lightweight, in-memory implementation LocalTerminologyServer. It supports ValueSets with define and compose statements, however it does not support filters. It’s designed to handle most common uses of ValueSet, but will happily throw NotSupportedException if you push it too far or you reach a certain maximum number of concepts in your expansion.

The LocalTerminologyServer depends on two pieces of new functionality:

  • The `ValueSet` class now has methods to search by code/system in an expansion
  • There is a new class ValueSetExpander, which would expand a valueset in-memory (with the caveats mentioned above)

Most likely we will still add another implementation called MultiStrategyTerminologyServer that would first use this local server, and when that fails, reach out to a real terminology server and use the FHIR REST Terminology operations to get what it needs.
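The envisioned fallback behaviour could look something like the sketch below (hypothetical types and names; as stated above, MultiStrategyTerminologyServer does not exist yet):

```csharp
using System;

// Hypothetical sketch of the planned multi-strategy service: try the local
// in-memory implementation first, and fall back to a remote terminology
// server when the local one gives up with NotSupportedException.
public interface ITerminologyService
{
    bool ValidateCode(string system, string code);
}

public class MultiStrategyTerminologyService : ITerminologyService
{
    private readonly ITerminologyService _local;
    private readonly ITerminologyService _remote;

    public MultiStrategyTerminologyService(ITerminologyService local, ITerminologyService remote)
    {
        _local = local;
        _remote = remote;
    }

    public bool ValidateCode(string system, string code)
    {
        try
        {
            return _local.ValidateCode(system, code);
        }
        catch (NotSupportedException)
        {
            // e.g. a filtered ValueSet, or an expansion that grew too large
            return _remote.ValidateCode(system, code);
        }
    }
}

// Fakes for demonstration: a local service that gives up, a remote that answers.
public class ThrowingLocalService : ITerminologyService
{
    public bool ValidateCode(string system, string code) { throw new NotSupportedException(); }
}

public class AlwaysValidRemoteService : ITerminologyService
{
    public bool ValidateCode(string system, string code) { return true; }
}
```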


Validation

That brings us to the most interesting addition for most people: the validator. It’s been a long-standing wish to add this to the .NET API, but thanks to support from UK’s NHS Digital, we have been able to work on this as well.

If you have been reading about the new features above, you can see these are all pillars that make validation possible:

  • A flexible poco-less way to efficiently read any kind of instance FHIR data (from file, xml, json, memory, database) – the `IElementNavigator`
  • A way to retrieve StructureDefinitions and other conformance resources to validate the instance against – the `IResourceResolver`
  • Generate snapshots for these definitions that serve as input to the validator
  • Be able to run constraints formulated in FhirPath
  • Be able to validate bindings using `ITerminologyService`

Mix these together, add some smart validation logic and you get the Validator.

It is not completely done yet (of course we left slice validation till the end), but it does most other constraints, including binding and simple extension validation. It handles Bundles and aggregation modes as well.

Learning more

While writing this blog post I realized how much functionality we have added, and of course I had to leave lots of details out of the description above. The best way to get started is by running the validator demo and looking at the code and unit tests. No, indeed, we have no online documentation about all of this yet. Another great way to learn more is to visit the upcoming HL7 FHIR DevDays, where I will have a session on these (advanced) uses of the FHIR .NET API.

For now, let me get back to my Visual Studio and try to get it all polished and finished up for you.

Metadata in FHIR

Last week, I implemented the $meta, $meta-add and $meta-delete operations in Ewout’s FHIR .NET API. This gave me the idea to write a blog post about how FHIR metadata has changed between DSTU-1 and the current version of FHIR (v0.4.0).


In DSTU-1 metadata consists of:

  • LogicalId – Constant for the entire lifetime of the resource on the server.
  • VersionID – Changes every time the resource changes.
  • Last Modified Date – The date the resource was last modified.
  • Tags – Expressions of out-of-band data, for example a profile tag, which stated to which profile(s) the resource conformed, or a security tag which allowed you to put a security label on your resource.

In DSTU-1 the metadata was positioned outside the resource, in an “envelope” that also contained the resource. On the wire, this data was put into headers (the Category and Content-Location headers to be precise).

What’s changed?

Now let’s talk about DSTU-2:

In DSTU-2 the metadata moved from outside the resource to inside the resource. The LogicalId has moved into the resource itself, and there is now a “Meta” datatype that can exist on every FHIR resource.
This Meta datatype has similar attributes to the metadata from DSTU-1. There is still versionId and lastUpdated. Deleted is new, and defines whether a resource has been deleted or not. The profile and security attributes were tags in DSTU-1, and have now become attributes of the Meta datatype.
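In code, working with the new metadata comes down to reading properties on the resource's Meta. The sketch below uses simplified stand-in classes mirroring the attributes described above, not the actual FHIR .NET API types:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in mirroring the DSTU-2 (v0.4.0) Meta attributes
// described above; not the actual FHIR .NET API classes.
public class Meta
{
    public string VersionId { get; set; }
    public DateTimeOffset? LastUpdated { get; set; }
    public bool? Deleted { get; set; }
    public List<string> Profile { get; set; } = new List<string>();    // formerly profile tags
    public List<string> Security { get; set; } = new List<string>();   // formerly security tags
}

public class Resource
{
    public string Id { get; set; }                        // the LogicalId, now inside the resource
    public Meta Meta { get; set; } = new Meta();          // metadata lives on the resource itself
}
```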

Why this change?

The metadata was moved inside the resource because people expect it to be there. If you fetch a resource from a FHIR server, you would expect to get a resource returned, not an envelope with metadata that also contains a resource.

You want to access the resource attributes like (DSTU-2):

var pat = client.Read<Patient>("Patient/4"); 
var name = pat.Name; 
var id = pat.Id;

instead of (DSTU-1):

var patEnvelope = client.Read<Patient>("Patient/4"); 
var name = patEnvelope.Resource.Name;

Other REST APIs had comparable data as well, but put it side-by-side with the actual resource data; no enveloping involved.

The $meta operation

The $meta operation can be used to access metadata, without getting the resource returned. The operation can be performed on different levels:

On server level:

GET [base]/$meta

On resource type level:

GET [base]/[type]/$meta

On these levels, the $meta operation is used to get a summary of all the labels that are in use across the system. The principal use for this operation is to support search, e.g. which tags can be searched for. At these levels, the returned “meta” will not contain versionId, lastUpdated, deleted etc.

The operation can also be performed on instance level:

GET [base]/[type]/[id]/$meta

And even on history level:

GET [base]/[type]/[id]/_history/[vid]/$meta

At these levels, the $meta operation returns the same meta as would be returned by accessing the resource directly. This can be used to allow a system to get access to the meta-information for the resource without retrieving the full resource itself, e.g. for security reasons.

$meta-add and $meta-delete

Using the $meta-add and $meta-delete operations we can easily change the metadata on resources. These operations can be performed on the instance and history level:

POST [base]/[type]/[id]/$meta-add
POST [base]/[type]/[id]/_history/[vid]/$meta-add

(and likewise for $meta-delete)

The metadata to be added or deleted is sent inside the body of the REST-call represented in a Meta datatype. This will look something like:

<?xml version="1.0" encoding="UTF-8"?>
<Parameters xmlns="http://hl7.org/fhir"> 
   <parameter>
      <name value="meta"/>
      <valueMeta>
         <versionId value="2"/> 
         <lastUpdated value="2015-03-17T10:57:34+01:00"/> 
         <deleted value="false"/>
         <profile value=""/> 
         <security>
            <system value=""/> 
            <code value="CEL"/>
            <display value="Celebrity"/> 
         </security>
      </valueMeta>
   </parameter>
</Parameters>
Funny thing about this is that these operations are currently the only ones that can change a previous version of a resource. Note that changing the metadata does not change the version id. This is because the content in meta does not affect the meaning of the resource, and the security labels (in particular) are used to apply access rules to existing versions of resources.

These new operations are included in the newest version of the FHIR .NET API. However, we can’t test them yet, because no server has implemented them yet. I will post an update to this blog post once we can get a proper response to the operations from one of the DSTU-2 FHIR servers.

Supporting FHIR Json and Xml in .NET

We (in this case me, and my colleague Martijn) are currently writing the Profile validation subsystem for the .NET reference implementation. If you have spent time looking at the Profile specification you can probably sympathise that this is neither an easy task nor a matter of a few days of frantic hacking. In fact, we both consider this the perfect task to spend our summer on, and that’s exactly what we’re doing.

As part of building the validator, we needed a solution to validate both Json and Xml FHIR instances, without writing the infrastructure twice. There are basically two possible scenarios to handle this:

  1. Write an abstraction on top of Json and Xml and make the validator work with that (in fact, that’s what I have done with the .NET parsers)
  2. Use the existing parsers to parse the Json into POCO’s, and serialize them back out to Xml, then validate the Xml.

Although the second approach is certainly feasible, it would mean parsing Json instances into memory structures and serializing them out again, just for validation. That seems like a waste.

XPathNavigator to the rescue

Luckily, .NET has a little-known feature that allows us to take the first approach and stay native to the platform: XPathNavigator. This little abstract class allows you to implement a few specific navigation methods over any data structure, and then pass it to the existing XML infrastructure. Once you’ve done that, you can execute XPath statements and run Xslt stylesheets over your datasource, as if it were Xml in the first place. People have been implementing this interface on top of the Windows Registry, a filesystem, LDAP, you name it.

You’ve probably guessed it: we implemented it on top of Newtonsoft’s popular Json parser. We’ve taken care of all the weird FHIR-specific Json features (like the extensions in “_”-prefixed members and the irregular mapping of Binaries to Xml), and you can just treat the Json as Xml. So, for example, if you take a look at the Patient in this test file, you can now run the following code:

 var nav = new JsonXPathNavigator(new JsonTextReader(new StreamReader(file)));
 var mgr = new XmlNamespaceManager(nav.NameTable);
 mgr.AddNamespace("f", "http://hl7.org/fhir");
 var text = nav.SelectSingleNode("/f:Patient/f:telecom[2]/f:use/@value", mgr);
 Assert.AreEqual("work", text.Value);

Note that I certainly did not write the SelectSingleNode() function; all the Xml infrastructure now just works, even an Xslt transform:

 var nav = new JsonXPathNavigator(new JsonTextReader(new StreamReader(file)));
 XslCompiledTransform xslt = new XslCompiledTransform();
 xslt.Transform(nav, outputStream);

As a result, we can now run the invariants provided in the spec on Json instances and build our validation logic only once.

If you’re interested, you can take a look at JsonXPathNavigator.cs here. It will be included in the NuGet package in the next release.