
EHR/EMR Integration

     

    EHR/EMR Integration with HL7 and FHIR Interface

    This is the eiConsole, an integrated development environment for building, deploying, maintaining and testing integration interfaces.

    When you first start the eiConsole, you’re presented with this route and file management dialog, which shows your currently selected working directory. Notice the project folder and all of the interfaces that currently exist within it.

    PilotFish is being leveraged in virtually every area of healthcare. Here we have a selection of demo interfaces that are loosely based on actual client implementations.

    Configuring an EHR/EMR Interface End-to-End

    Today we’ll take a look at Interface 8 – EMR Integration. When you open an interface, you’re shown the main eiConsole screen. This table represents the topology, the workflow of the interface, organized into rows and columns. Each row on the left represents a Source System; here we have two Source Systems. Each row on the right represents a Target System; here, two Target Systems.

    Each of the 7 columns represents one of the 7 stages in our Assembly Line approach to building interfaces. Users start by adding as many Sources or Targets as they’d like, then configure each stage moving left to right; a rough sketch of the end-to-end flow follows this list.

    1) A Source System that stores information about the system you’re connecting to.

    2) A Listener to actually pull information from that Source System.

    3) A Source Transform, since each Source System might be providing you data in a different format. The job of the Source Transform is to turn the data into a common representation.

    4) A Routing stage, responsible for determining which Target Systems a given message goes to.

    5) A Target Transform for each Target System that might need a different format or representation of the same data.

    6) A Transport for sending the data out. 

    7) A Target System to record the name of, and information about, the system you’re sending to.
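
    Taken together, the seven stages behave like a pipeline. As a rough mental model only (the eiConsole configures all of this graphically, and none of the names below come from the product), the flow might be sketched like this:

        # Purely illustrative sketch of the assembly-line flow; none of these
        # names are taken from the eiConsole itself.
        def listener(path):                        # 2) pull raw data from the Source System
            with open(path, "rb") as f:
                return f.read()

        def source_transform(raw):                 # 3) source format -> common representation
            return {"payload": raw.decode().strip()}

        def route(message, targets):               # 4) decide which Target Systems get the message
            return targets                          # simplest policy: send to all Targets

        def target_transform(message, target):     # 5) common representation -> the Target's format
            return f"{target}|{message['payload']}"

        def transport(payload, target):            # 6) actually send the data out
            print(f"sending to {target}: {payload}")

        def run_interface(path, targets):          # stages 1 and 7 are just named endpoints
            message = source_transform(listener(path))
            for target in route(message, targets):
                transport(target_transform(message, target), target)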

    Defining How the Data Comes Into Your Route

    After adding your Sources and Targets, you’ll start at the first stage, which we call the Source System. The Source System stage is a place to provide a system name. You can also choose an icon that best represents the system you’re connecting to.

    The second stage, the Listener, is where you’ll actually decide how to get information from the Source System. You do that by selecting a Listener type from this drop-down here.

    You can poll a database, a directory, email, FTP, lots of flavors of HTTP, queues and more esoteric means. If you can think of a way to send or receive data, there’s a Listener available.

    After you’ve selected the Listener type, you’ll simply fill in its configuration items down here. So our first Source System is accepting CSV files in a directory, and our second one is accepting EDI files coming in over FTP. After a Listener accepts or creates a transaction, it will move left to right through each of the subsequent stages in your interface. The first set of stages it’ll go through are Processors.
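
    As a loose illustration of what a directory Listener amounts to conceptually (the folder, polling interval and handler below are assumptions, not eiConsole configuration), it polls a folder and starts a transaction for each new file:

        import time
        from pathlib import Path

        def poll_directory(inbox: str, handle, interval: float = 5.0) -> None:
            """Pick up each new CSV file in the inbox and hand it to the interface."""
            seen = set()
            while True:
                for csv_file in Path(inbox).glob("*.csv"):
                    if csv_file not in seen:
                        seen.add(csv_file)
                        handle(csv_file.read_text())   # start a new transaction
                time.sleep(interval)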

    Processors perform high-level operations on data, right after it comes into the system or right before it goes out. This can be things like decryption, character conversion or compression. We ship with quite a few Processors available. Each Source System has a Source Transform associated with it.

    The job of the Source Transform is to take whatever a particular Source System provided and convert it to a common representation of that data, so that by the time it reaches the Routing stage, all of your data is in the same format. This happens in two parts.

    The first step is a transformation module that takes any non-XML format and converts it to XML. This could be something as simple as a CSV, a delimited or fixed-width file like a COBOL copybook or a terminal scrape, EDI (the 837, 834, etc.), HL7 2.x, JSON or Microsoft Excel. If you know you’re receiving XML already, such as CDAs, CCDAs or the FHIR standard, you would choose no transformation.
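
    To make that first step concrete, here is a minimal sketch (not PilotFish’s actual output format; the column names are invented for the example) of the idea behind a CSV-to-XML transformation module:

        import csv
        import io
        import xml.etree.ElementTree as ET

        def csv_to_xml(csv_text: str) -> str:
            """Turn each CSV row into a <Patient> element with one child per column."""
            root = ET.Element("Patients")
            for row in csv.DictReader(io.StringIO(csv_text)):
                patient = ET.SubElement(root, "Patient")
                for field, value in row.items():
                    ET.SubElement(patient, field).text = value
            return ET.tostring(root, encoding="unicode")

        sample = "LastName,FirstName,DOB\nSmith,Jane,1980-01-31\n"
        print(csv_to_xml(sample))
        # <Patients><Patient><LastName>Smith</LastName>...</Patient></Patients>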

    The second step is an XSLT transformation to convert the XML you’ve received or created into your common format. We’ll revisit this stage a little bit later on the Target side to demonstrate how we convert to an HL7 XML representation.

    Routing Your Data to Your Target System

    The next stage is the Routing stage, and its job is two-fold. The first is to determine which of your Target Systems a given message goes to. That can be all Targets, you can do round-robin load balancing between multiple systems, or you can set up arbitrarily complex expressions around the content of, or metadata about, a message.

    The second job of the Routing stage is to determine what happens in the event of an error. You do that by defining transaction monitors. Transaction monitors listen for errors anywhere in the interface you’ve defined. That can be something as simple as sending an email, or you can set up an entire workflow and treat your error like any other kind of transaction, which enables you to handle that error in any way you can conceive of.
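
    Conceptually, the routing policies described here boil down to something like the following sketch, where the target names, message shape and policies are assumptions chosen purely for illustration:

        import itertools

        targets = ["EHR_A", "EHR_B"]
        round_robin = itertools.cycle(targets)

        def route(message: dict, mode: str = "all") -> list:
            if mode == "all":                       # every Target gets the message
                return list(targets)
            if mode == "round_robin":               # alternate between Targets
                return [next(round_robin)]
            if mode == "content":                   # route on message content or metadata
                return ["EHR_A"] if message.get("type") == "ADT" else ["EHR_B"]
            raise ValueError(f"unknown routing mode: {mode}")

        print(route({"type": "ADT"}, mode="content"))   # ['EHR_A']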

    When a message enters a Target, we reverse the same order of operations we did for a given Source System. So, there’s a transformation here to convert the XML canonical representation into a different XML format and then a transformation module to convert that to any non-XML representation.

    Mapping Your Data and Transforming It Into a Common XML Format

    The tool we use for building out these transformations is our Data Mapper. We’ll open this up for an HL7 2.x Transform. This is a 3-pane mapping tool. On the left-hand side, we have our source format – what you’re mapping from. In this case, our common representation is just representing some basic patient information. So, for our two Source Systems, we’ve just provided our patient data. On the right-hand side, you have your Target format – this is what you are mapping to. In this case, we’ve loaded the entire HL7 2.x standard and we’re mapping to an ADT A01 feed.

    In the center panel, you have your actual mapping logic. You build out this mapping by dragging & dropping from your source and target format into this panel here.

    Up at the top, you have a palette of all your XSLT and XPath functions. These allow you to do conditions, iteration, flow control and really anything else you need to facilitate mapping. The reason we have this third panel in the center for building these mappings is that we find it grows and scales well as mapping complexity increases.

    What a lot of tools do is give you a left-hand side and a right-hand side and have you drag lines between them. That works great for a few fields, maybe a dozen, or in a demo; but once you have hundreds of fields, repetition and all of the logic that’s part of a real-life mapping, that model breaks down very, very quickly.

    For our tool, this tree just happens to get a little bit longer. Working with the center panel is just a matter of dragging & dropping. So we’ll take our PID-5 (the patient name), take the family name here, the last name, and just delete that node. So, click delete. On the right-hand side, if we navigate down to the same field, we’ll see that it doesn’t have the checkmark like some of the others do, and it’s not italicized.

    So right off the bat, the Data Mapper will tell you what fields you have and have not mapped. To recreate that, we can just drag this element onto its parent. So, we’ll drag this onto PID-5, the patient name, and it’s going to put it in the correct order. Now you just have to make a determination as to how you’re going to populate that particular field. You’ve got this tool palette up here if you need to do some kind of code call-out or another way to augment that data. We could hard-code it, or if we want to populate it from our Source System, we can just take the last name and drag it onto XPN1.

    Underneath, this is producing and consuming W3C-compliant XSLT. So, we can see the XSLT tag that was created by dragging and dropping that here. Now, this is a fully-featured code editor. Any changes you make here will show up in the UI (User Interface) and vice versa. It has auto-completion and all the other things you would expect, but you never have to drop down into this view.

    The only reason we show it is to demonstrate that this is, again, W3C-compliant XSLT. There’s nothing proprietary about the transformation logic being used here. Also featured within the Data Mapper is a fully-featured testing facility. You can switch to this testing tab, provide a source sample, push a button and see the output.
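
    Because the output is standard W3C XSLT, the same stylesheet can be run by any conformant processor outside the tool. Purely as an illustration (this stylesheet is hand-written, not generated by the Data Mapper, and it assumes the lxml library is installed), applying a tiny patient-name mapping in Python looks like this:

        from lxml import etree

        # Hand-written example stylesheet: copy the patient's names into PID-5.
        xslt_doc = etree.XML(b"""
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/Patient">
            <PID.5>
              <XPN.1><xsl:value-of select="LastName"/></XPN.1>
              <XPN.2><xsl:value-of select="FirstName"/></XPN.2>
            </PID.5>
          </xsl:template>
        </xsl:stylesheet>""")

        source = etree.XML(b"<Patient><LastName>Smith</LastName><FirstName>Jane</FirstName></Patient>")
        transform = etree.XSLT(xslt_doc)
        print(str(transform(source)))   # the result contains <XPN.1>Smith</XPN.1> inside <PID.5>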

    There’s also a debugger for stepping through to see exactly what’s going on at each stage of that transformation. When you first open up a mapping, you’ll usually start by defining your Source and Target formats. You can do that by clicking this button and its corresponding button on the source side. You’ll choose a format reader. This can be something like DICOM, EDI, flat files or HL7. You can read directly from a database or a SOAP service. In this case, we’ll choose HL7 2.x, and you can see we provide an HL7 version and can even limit it by message type.

    When you’re using the built-in format readers for HL7, EDI and some of the other standards, what these provide is friendly naming within the Data Mapper. So, we know, for example, that PID-5 is the patient name and that XPN1 underneath it is the family name. Anywhere HL7 or another standards organization has provided information, that will be inline as well.

    For coded values like PID-8, you’ll see those in a codes tab, so you don’t even need to refer to the external documentation to work with these different formats. Some other features in the Data Mapper: you have the ability to search, extend and sort through the different formats on the left and right. You can even do schema slicing to export these to a schema.

    Another useful aspect of the Data Mapper is that any of the transformations you build in here are inherently reusable. You can import other mappings from within here or use this one in another transformation. That’s the Data Mapper at a high-level overview. And one more thing we’ll stress here is that this is the same tool that’s going to be used regardless of what your Source or Target System is, for instance, if you’re mapping from a database to HL7, from HL7 to FHIR or from an Excel file to a PDF.

    You would use this Data Mapper in every single case. You don’t need to learn different tools for every single standard you come across. After we’ve completed our mapping, we’ve configured a transformation module here to take that XML representation of HL7 and convert it to the actual HL7 delimited file. The next step is to define a Transport.

    Defining How the Data is Sent Out – Transport

    The job of the Transport, like the Listener, is to determine how data is going to be sent out. You also have a chain of Processors available if you need to encrypt, compress or otherwise manipulate that data before it reaches the Transport stage. If you can think of a way to send or receive data, there’s a Transport available.

    So, our first Target System here is going to be doing an HTTP POST to a particular URL, whereas our second one is going to be sending an HL7 message over MLLP (Minimal Lower Layer Protocol). The last stage is to define a Target System. Like the Source System, you’ll provide a name and you again have the ability to choose an icon that best represents that system.
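
    For context, the MLLP framing used by a Transport like this is very thin: the HL7 message is wrapped in a start byte (0x0B) and trailing bytes (0x1C 0x0D) and written to a TCP socket. A minimal sketch, with a placeholder host and port:

        import socket

        def send_mllp(hl7_message: str, host: str = "localhost", port: int = 2575) -> bytes:
            """Frame an HL7 v2 message with MLLP delimiters, send it, and return the raw ACK."""
            frame = b"\x0b" + hl7_message.replace("\n", "\r").encode() + b"\x1c\x0d"
            with socket.create_connection((host, port)) as sock:
                sock.sendall(frame)
                return sock.recv(4096)   # the receiver's ACK, also MLLP-framed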

    Testing Your Interface End-to-End

    After you’ve configured your interface, moving left to right following our assembly line approach, you’ll test it. To test, you just go to the route and switch to testing mode. There’s no compilation or deployment, and you can start your test from anywhere within here. What we’ll do in this particular case is go to our Source Transform, click start test here, and provide a sample file that represents the data at that point. So here we’ll choose that we’re taking our data from a file and provide this test CSV sample. Then you go up and push execute test.

    As each stage completes, you’ll get a green checkmark telling you that the stage has completed, and a red X for any failures. You can then view the transaction and how each stage affects it. So, we can view the original CSV we provided; this delimited and fixed-width file transformation converts that CSV to an XML representation.

    Here we’ve pulled out some basic patient information with a mapping that we defined but didn’t walk through. Then, for each of my Target Systems, these are configured to generate that data in a different format. So here the first transform is defining a FHIR transaction.

    Our second mapping, the one we walked through, shows us the HL7 2.x output, the result of that transform. And then lastly, for any of these Transports that failed, we can open this up and get some more information about why that particular system wasn’t able to be reached. In this case, we have a 404, and we couldn’t reach that particular Target System.
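
    To illustrate what that failing HTTP Transport amounts to, here is a rough sketch (the endpoint URL and the minimal FHIR transaction Bundle are made up for the example, not taken from the demo) of posting to the Target System and surfacing a 404:

        import json
        from urllib import request, error

        bundle = {
            "resourceType": "Bundle",
            "type": "transaction",
            "entry": [{
                "resource": {"resourceType": "Patient",
                             "name": [{"family": "Smith", "given": ["Jane"]}]},
                "request": {"method": "POST", "url": "Patient"},
            }],
        }

        req = request.Request(
            "https://example.org/fhir",                       # placeholder endpoint
            data=json.dumps(bundle).encode(),
            headers={"Content-Type": "application/fhir+json"},
        )
        try:
            with request.urlopen(req) as resp:
                print("delivered, status", resp.status)
        except error.HTTPError as exc:                        # e.g. the 404 seen in the test
            print("Target System could not be reached:", exc.code)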

    Deploying Your Interface

    After you’ve completed your testing, you can return to the file management dialog and decide what you’d like to do with that interface. You can take the directory where it’s stored and check it into a source control repository like SVN, Git or SourceSafe. All of our configurations are stored as plain-text XML files and folders.

    You can connect to the eiPlatform server, drag & drop, and hot-deploy it to be running immediately, or you can share it on our PilotFish Interface Exchange (PIE), a sort of App Store for interfaces. If you are a vendor and you’d like to provide an interface for your clients or users, you can very easily share it on the PIE and they can download and use it. After you’ve deployed an interface, you can monitor it using our eiDashboard.

    The dashboard provides an operational view of all interfaces running on the eiPlatform and real-time statistics for transaction throughput and health. Administrators can manage and monitor all aspects of the integration platform through the eiDashboard. If you are a solutions provider, you can also return to the file management dialog to duplicate, modify or tweak interfaces or interface templates.

    Try the eiConsole Yourself!

    Thanks for watching this demonstration of the eiConsole for Healthcare. Download a Free 90-Day Trial of the eiConsole or review the Product Specifications.

    If you’re curious about the software features, free trial, or even a demo – we’re ready to answer any and all questions. Please call us at 813 864 8662 or click the button.

    HL7 is the registered trademark of Health Level Seven International. 
