EMR Integration Demo of the eiConsole for Healthcare
This is the eiConsole, an integrated development environment for building, deploying, maintaining and testing integration interfaces.
When you first start the eiConsole, you’re shown this route and file management dialog, which shows your currently selected working directory. Notice the project folder and all the interfaces that currently exist within it.
PilotFish is being leveraged in virtually every area of healthcare. Here we have a selection of demo interfaces that are loosely based on actual client implementations.
Configuring an EMR Interface End-to-End
Today we’ll take a look at Interface 8 – EMR Integration. When you open an interface, you’re shown the main eiConsole screen. This table represents the topology – the workflow – of the interface, organized into rows and columns. Each row on the left represents a source system – here we have two source systems – and each row on the right represents a target system – here, two target systems.
Each of the 7 columns represents one of the 7 stages in our assembly-line approach to building interfaces. Users start by adding as many sources and targets as they’d like. Moving left to right, you configure: a source system, which stores information about the system you’re connecting to; a listener, to actually pull information from that source system; a source transform, since each source system might be providing data in a different format – its job is to turn that data into a common representation; a routing stage, responsible for determining which target systems a given message goes to; a target transform for each target system that might need a different format or representation of the same data; a transport, for sending the data out; and a target system, to record the name and other information about that system.
After adding your sources and targets, you’ll start at the first stage – the source system. This stage is a place to provide a system name, and you can also choose an icon that best represents the system you’re connecting to.
Defining How the Data Comes Into Your Route
The second stage, the listener, is where you’ll actually decide how to get information from the source system. You do that by selecting a listener type from this drop-down here.
So you could poll a database, directory, email, FTP, lots of flavors of HTTP, queues and more esoteric means. If you can think of a way to send or receive data there’s a listener available.
Having selected a listener type, you simply provide some configuration information down here. So our first source system is accepting CSV files in a directory, and our second is accepting EDI files coming in over FTP. After a listener accepts or creates a transaction, it will move left to right through each of the subsequent stages in your interface. The first set of stages it’ll go through are processors.
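Conceptually, a directory listener like the one configured for our first source system just polls a folder and turns each new file into a transaction. The sketch below is a minimal, hypothetical illustration in Python – the function and directory names are illustrative, not PilotFish’s actual implementation:

```python
import shutil
from pathlib import Path

def poll_directory(inbox: Path, processed: Path, pattern: str = "*.csv"):
    """Scan an inbox directory, yield each matching file's contents as a
    new transaction, then move the file aside so it isn't picked up twice."""
    processed.mkdir(parents=True, exist_ok=True)
    for path in sorted(inbox.glob(pattern)):
        data = path.read_text()
        # Move the file out of the inbox before yielding, mimicking a
        # listener that marks work as consumed.
        shutil.move(str(path), str(processed / path.name))
        yield path.name, data
```

A real listener would also handle partial writes, retries and error queues; this only shows the basic poll-and-consume loop.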
Processors perform high-level operations on data right after it comes into the system or right before it goes out. This can be things like decryption, character conversion or compression, and we ship with quite a few processors available. Each source system has a source transform associated with it.
The job of the source transform is to take whatever format a particular source system provided and convert it to a common representation of that data, so that by the time it reaches the routing stage, all of your data is in the same format. This happens in two parts.
The first part is a transformation module to take any non-XML format and convert it to XML. This could be something as simple as a CSV; a delimited or fixed-width file, like a COBOL copybook or a terminal scrape; EDI – the 837, 834, etc.; HL7 2.x; JSON; or Microsoft Excel. Or, if you know you’re receiving XML already, such as CCD/CDA or the FHIR standard, you would choose no transformation.
The second part is an XSLT transformation to convert the XML you’ve received or created into your common format. We’ll revisit this stage a little later on the target side to demonstrate how we convert to an HL7 XML representation.
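To make the first part concrete, here is a hedged Python sketch of what a simple CSV-to-XML transformation module might do. The tag names are illustrative assumptions, not the actual output of the eiConsole’s delimited-file module:

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(text: str, root_tag: str = "Patients", row_tag: str = "Patient") -> str:
    """Convert delimited text into a simple XML representation:
    one element per row, one child element per column header."""
    root = ET.Element(root_tag)
    for row in csv.DictReader(io.StringIO(text)):
        elem = ET.SubElement(root, row_tag)
        for field, value in row.items():
            ET.SubElement(elem, field).text = value
    return ET.tostring(root, encoding="unicode")
```

Once everything is XML, a single XSLT step can reshape it into the common format, which is what makes the two-part design composable.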
Routing Your Data
The next stage is the routing stage, and its job is two-fold. The first is to determine which of your target systems a given message goes to. That can be all targets, you can do round-robin load balancing between multiple systems, or you can set up arbitrarily complex expressions around the content or metadata of a message.
The second job of the routing stage is to determine what happens in the event of an error. You do that by defining transaction monitors, which listen for errors anywhere in the interface you’ve defined. That can be something as simple as sending an email, or you can set up an entire workflow and treat your error like any other kind of transaction, which enables you to handle that error in any way you can conceive of.
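As a rough illustration of the routing patterns described above – all targets, round-robin load balancing, and content-based rules – here is a hypothetical Python sketch. It is not PilotFish’s routing engine, just the underlying ideas:

```python
import itertools

def route_all(message, targets):
    """Fan the message out to every configured target."""
    return list(targets)

def make_round_robin(targets):
    """Return a router that cycles through targets, one per message."""
    cycle = itertools.cycle(targets)
    return lambda message: [next(cycle)]

def route_by_content(message, rules, default=None):
    """Content-based routing: send to the first target whose
    predicate matches the message; otherwise fall back to a default."""
    for predicate, target in rules:
        if predicate(message):
            return [target]
    return [default] if default is not None else []
```

In a real interface, the predicates would be expressions over message content or metadata – for example, routing ADT messages to one system and lab results to another.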
When a message enters a target, we reverse the order of operations we performed for a source system. So, there’s an XSLT transformation here to convert the canonical XML representation to a different XML format, and then a transformation module to convert that to any non-XML representation.
Mapping Your Data
The tool we use for building out these transformations is our Data Mapper. We’ll open it up for an HL7 2.x transform. This is a 3-pane mapping tool. On the left-hand side, we have our source format – what you’re mapping from. In this case, our common representation just holds some basic patient information, provided by our two source systems. On the right-hand side, you have your target format – what you are mapping to. In this case, we’ve loaded the entire HL7 2.x standard, and we’re mapping to an ADT A01 feed.
In the center panel, you have your actual mapping logic. You build out this mapping by dragging and dropping from your source and target format into this panel here.
Up at the top, you have a palette of all your XSLT and XPath functions. These allow you to do conditions, iteration, flow control and really anything else you need to facilitate mapping. The reason we have this third panel in the center here for facilitating these mappings is that we find it grows well and scales as mapping complexity increases.
What a lot of tools do is give you a left-hand side and a right-hand side and have you drag lines between them. That works great for a few fields – maybe a dozen, or in a demo – but once you have hundreds of fields, repetition, and all of the logic that’s part of a real-life mapping, that model breaks down very, very quickly.
For our tool, this tree just happens to get a little bit longer. Working with the center panel is just a matter of dragging and dropping. So, what we’ll do is take PID-5, the patient name, find the family name – the last name – and just delete that node. So, click delete. On the right-hand side, if we navigate down to the same field, we’ll see that it doesn’t have the check mark like some of the others do, and it’s not italicized.
So right off the bat, the Data Mapper will tell you which fields you have and have not mapped. To recreate that mapping, we can just drag this element onto its parent. So, we’ll drag it onto PID-5, the patient name, and it will put it in the correct order. Now you just have to make a determination as to how you’re going to populate that particular field. You’ve got this tool palette up here if you need to do some kind of code call-out or another way to augment that data.
We could hard-code it, or, if we want to populate it from our source system, we can just take the last name and drag it onto the XPN. Underneath, this is producing and consuming W3C-compliant XSLT.
So, we can see the XSLT tag that was created by dragging and dropping that here. Now, this is a fully featured code editor. Any changes you make here will show up in the UI and vice versa. It has auto-completion and all the other things you would expect, but you never have to drop down into this view.
The only reason we show it is to demonstrate that this is, again, W3C-compliant XSLT – there’s nothing proprietary about the transformation logic being used here. Also featured within the Data Mapper is a fully-featured testing facility. You can switch to this testing tab, provide a source sample, push a button and see the output.
There’s also a debugger for stepping through the transformation to see exactly what’s going on at each stage. When you first open a mapping, you’ll usually start by defining your source and target formats. You can do that by clicking this button, and its corresponding button on the source side.
Formatting Your Data into a Common Standard
You’ll choose a format reader. This can be something like DICOM, EDI, flat files or HL7, or you can read directly from a database or a SOAP service. In this case, we’ll choose HL7 2.x. You can see we provide an HL7 version, and we can even limit it by message type.
When you’re using the built-in format readers for HL7, EDI and some of the other standards, these will provide friendly naming within the Data Mapper. So, we know, for example, that PID-5 is the patient name, and that XPN.1 underneath it is the family name. Anywhere HL7 or another standards organization has provided information, that will be inline as well.
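To illustrate the structure behind those friendly names, here is a small, illustrative Python helper that pulls a component like PID-5.1 (the family name) out of a pipe-delimited HL7 2.x message. It is deliberately simplified – it ignores HL7 escape sequences, repetitions and MSH’s special field numbering:

```python
def hl7_field(message: str, segment: str, field: int, component: int = 1) -> str:
    """Pull one component out of a pipe-delimited HL7 2.x message,
    e.g. PID-5.1 is the patient's family name.
    Only valid for non-MSH segments, where field N sits at index N
    after splitting on '|'."""
    for line in message.replace("\r", "\n").splitlines():
        fields = line.split("|")
        if fields and fields[0] == segment:
            components = fields[field].split("^")
            return components[component - 1]
    raise ValueError(f"{segment}-{field} not found")
```

This is exactly the kind of positional bookkeeping the Data Mapper’s friendly naming spares you from doing by hand.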
For coded values like PID-8, you’ll see those in a codes tab, so you don’t even need to refer to external documentation to work with these different formats. Some other features in the Data Mapper: you have the ability to search, extend and sort through the different formats on the left and right.
You can even do schema slicing to export these to a schema. There’s a built-in HL7 Differencing Engine to facilitate mappings between different versions, so if you need to map from HL7 2.5 to 2.6, you can push a button to generate the map and then just make some adjustments.
Another useful aspect of the Data Mapper is that any of the transformations you build in here are inherently reusable. You can import other mappings from within here, or use this one in another transformation. That’s the Data Mapper at a high level. One more thing we’ll stress is that this is the same tool that’s going to be used regardless of what your source or target system is. So, whether you’re mapping from a database to HL7, from HL7 to FHIR, or from an Excel file to a PDF, you would use this Data Mapper in every single case.
You don’t need to learn different tools for every standard you come across. After we’ve completed our mapping, we’ve configured a transformation module here to take that XML representation of HL7 and convert it to the actual HL7 delimited file. The next step is to define a transport.
Defining How the Data is Sent Out
The job of the transport, like the listener, is to determine how data is going to be sent out. You also have a chain of processors available if you need to encrypt, compress or otherwise manipulate the data before it reaches the transport stage. If you can think of a way to send or receive data, there’s a transport available.
So, our first target system here is going to be doing an HTTP POST to a particular URL, whereas our second is going to be sending an HL7 message over MLLP (Minimal Lower Layer Protocol). The last stage is to define a target system. Like the source system, you’ll provide a name, and you again have the ability to choose an icon that best represents that system.
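For reference, MLLP framing itself is simple: each HL7 message is wrapped in a start-of-block byte (0x0B) and an end-of-block sequence (0x1C 0x0D) so the receiver can find message boundaries on a raw TCP stream. A minimal Python sketch of the framing (the transport itself would sit on top of a socket):

```python
# MLLP start-of-block byte and end-of-block sequence.
START, END = b"\x0b", b"\x1c\x0d"

def mllp_wrap(hl7_message: str) -> bytes:
    """Frame an HL7 message for transmission over MLLP."""
    return START + hl7_message.encode("utf-8") + END

def mllp_unwrap(frame: bytes) -> str:
    """Strip MLLP framing from a received block."""
    if not (frame.startswith(START) and frame.endswith(END)):
        raise ValueError("not a valid MLLP frame")
    return frame[len(START):-len(END)].decode("utf-8")
```

This is only the wire framing; a production MLLP transport would also manage the socket, timeouts and HL7 acknowledgements (ACK/NAK).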
Testing Your Interface End-to-End
After you’ve configured your interface, moving left to right following our assembly-line approach, you’ll test it. To test, you just go up to the route and switch to testing mode. There’s no compilation or deployment, and you can start your test from anywhere within the interface. In this particular case, we’ll go to our source transform, click start test here, and provide a sample file that represents the data at that point. So here we’ll choose to take our data from a file and provide this test CSV sample. Then you go up and push execute test.
As each stage completes, you’ll get a green check mark telling you that the stage has completed, and a red X for any failures. You can then view the transaction and how each stage affects it. So, we can view the original CSV we provided; this delimited and fixed-width file transformation converts that CSV to an XML representation.
Here we’ve pulled out some basic patient information using a mapping that we defined but didn’t walk through, and then each of my target systems is configured to generate that data in a different format. So here, the first transform defines a FHIR transaction.
Our second – the one we walked through – shows us the HL7 2.x result of that transform. And lastly, for any of these transports that failed, we can open this up and get some more information about why that particular system couldn’t be reached. So, in this case, we have a 404, and we couldn’t reach that particular target system.
Deploying Your Interface
After you’ve completed your testing, you can return to the file management dialog and make a determination as to what you’d like to do with that interface. You can take the directory where it’s stored and check it into a source repository like SVN, Git or SourceSafe. All of our configurations are stored as plain-text XML files in folders.
You can connect up to the eiPlatform server, drag and drop, and hot-deploy the interface to be running immediately, or you can share it on our PilotFish Interface Exchange (PIE), a sort of app store for interfaces. If you are a vendor and you’d like to provide an interface to your clients or users, you can very easily share it on the PilotFish Interface Exchange, and they can download and use it. After you’ve deployed an interface, you can monitor it using our eiDashboard.
The dashboard provides an operational view of all interfaces running on the eiPlatform, with real-time statistics for transaction throughput and health. Administrators can manage and monitor all aspects of the integration platform via the eiDashboard. If you are a solutions provider, you can also return to the file management dialog to duplicate, modify or tweak interfaces or interface templates.