
HL7 & EDI to DB/Web Services


    HL7, EDI Routed to Database and Web Services

    This is a demonstration of the PilotFish eiConsole IDE, a graphical integrated development environment for the rapid configuration of any Healthcare interface. This demo walks you through the 7 stages of interface building with HL7 and X12 EDI data sources that are routed to a database and a web service.

    When you open the eiConsole IDE, you'll arrive at the Route File Management screen, which serves as the home screen. You can think of these as project folders pointing to a directory, as you'll see above. These project folders are full of different routes we've built for other clients; just a few representative routes are shown here, such as HIE integration, where we handle the acquisition and normalization of health information exchange data, and the receipt of laboratory information for orders and results. We've done on-premise medical equipment integrations, such as smart cards in hospitals that must be integrated with pharmacy or EMR billing systems. There's also an example of acquiring clinical and administrative data for reporting and analytics. We'll jump into this route and walk through it end to end.

    Opening the Interface Project Folder with Multiple Routes

    Once we open a project folder, you can see it may contain multiple routes. I'm going to open this one to walk through it. This main screen is the type of screen we work from. Think of it as an assembly line: we move through 7 stages from left to right across the screen to create an end-to-end route. Everything on the left-hand side has to do with the Source System and where the data comes from; everything on the right has to do with the Target System, where the data is sent.

    Defining 3 Incoming Feeds of HL7, EDI and FHIR JSON Data

    In the case of this interface, we have some HL7 coming in, maybe from a hospital over MLLP. We're also dealing with some EDI, perhaps coming in from a provider practice, that we're going to pick up from FTP or SFTP. Then we have some FHIR data in JSON format that we're receiving over a web service. We're going to take those three feeds of data and transform them into something we can send on.

    First, on this second target line, we'll send the data to a SQL database for storage, and then we'll send a copy of that information to a web service as JSON. I'm going to walk through each of the 7 stages from front to back, describing what they do, what their features are, and what's happening within this interface.

    Documenting the Source System

    This first stage, called Source System, is just for documentation purposes, so that when you open any given route you can see graphically what's happening within it. If you have multiple resources working on the interfaces, everyone can be on the same page. We'll just give it a name and optionally add an icon to represent it. You can even tag some metadata below if you want to search for it later, but there's no functionality here yet beyond documentation.

    Defining Connectivity Protocols for the Listener Stage

    I'll move to the first actual functional stage, which is called the Listener stage. What we're doing here is defining the connectivity protocol used to receive the source data. You can see in this drop-down that we have a lot of different connectivity protocols out-of-the-box, probably every one you can think of. We can pull from AWS buckets or a database, pick things out of email, and use all the flavors of FTP and SFTP as well.

    In this case, we're using HL7 over LLP, or MLLP as we call it. There are all the flavors of HTTP as well, and even some more esoteric options we've added from working with clients: queuing systems such as Kafka, RabbitMQ, and MSMQ; databases like MongoDB or Hadoop; all the varieties of SOAP and REST; and pretty much anything else you can think of. (List of available PilotFish Listeners.)
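    To make the HL7-over-MLLP option concrete: MLLP simply wraps each HL7 message in a start byte and an end-byte pair on a TCP stream. A minimal sketch of that framing (which the PilotFish Listener handles for you; the helper names here are illustrative, not product APIs):

```python
# Sketch of MLLP (Minimal Lower Layer Protocol) framing for HL7 over TCP.
# Helper names are hypothetical; the eiConsole Listener does this internally.

START_BLOCK = b"\x0b"    # <VT> marks the start of an HL7 message
END_BLOCK = b"\x1c\r"    # <FS><CR> marks the end

def mllp_wrap(message: bytes) -> bytes:
    """Frame an HL7 message for transmission over a socket."""
    return START_BLOCK + message + END_BLOCK

def mllp_unwrap(frame: bytes) -> bytes:
    """Strip MLLP framing from a received frame."""
    if not (frame.startswith(START_BLOCK) and frame.endswith(END_BLOCK)):
        raise ValueError("not a valid MLLP frame")
    return frame[len(START_BLOCK):-len(END_BLOCK)]

msg = b"MSH|^~\\&|LAB|HOSP|EMR|CLINIC|202401010830||ADT^A01|123|P|2.5"
assert mllp_unwrap(mllp_wrap(msg)) == msg
```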

    If you don't see a connectivity protocol here, anywhere you see an ellipsis like this is a point of extension, which you can implement in Java or .NET. Below, you'll see simple configuration boxes. Once I choose the connectivity protocol I'll be working with, it may have required fields to fill in, but we try to keep sensible defaults. I'll also point out that within this stage we have what we call pre-processors, with a lot available out-of-the-box. Think of these as clean-up steps for the data, such as decompression, decryption, or pulling content out of a PDF; you can apply them here. (List of available PilotFish Processors.)

    Incoming EDI Data and Applying the SNIP Validation Processor

    Here on my second source, you can see I'm working with EDI, and for this Listener I have an FTP/SFTP Listener with some of the required fields filled in. I want to point out that we have a processor applied in this case. This is a new feature we're coming out with for EDI: a SNIP Validation Processor, so we can apply SNIP validation simply by selecting checkboxes.

    Within this EDI SNIP Validation Processor, you can see that Types 1-5 are available via checkbox. Types 1-3 are built-in rules delivered directly from the X12 implementation guides and schemas. Type 4 applies more robust semantic rules, and Type 5 validates against the external code sets we provide in a database. We can also handle Types 6 and 7 as a custom configuration defined when you work with our services team.
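    For a feel of what the lowest validation level involves, here is a very rough sketch of a SNIP Type 1 style integrity check: verifying X12 envelope structure and segment counts. The real processor works from the full X12 implementation guides; this toy version only checks that the ISA header is present and that the SE trailer's segment count matches the ST..SE transaction set.

```python
# Toy sketch of a SNIP Type 1 (EDI integrity) check. The actual PilotFish
# processor is far more complete; this only illustrates the idea.

def snip_type1_check(x12: str) -> list[str]:
    errors = []
    segments = [s for s in x12.strip().split("~") if s]
    if not segments or not segments[0].startswith("ISA"):
        errors.append("missing ISA interchange header")
        return errors
    sep = segments[0][3]  # the element separator follows the ISA tag
    tags = [s.split(sep)[0] for s in segments]
    # SE01 must equal the segment count from ST through SE, inclusive
    if "ST" in tags and "SE" in tags:
        start, end = tags.index("ST"), tags.index("SE")
        claimed = int(segments[end].split(sep)[1])
        actual = end - start + 1
        if claimed != actual:
            errors.append(f"SE01 says {claimed} segments, found {actual}")
    return errors

sample = "ISA*00*TEST~GS*HC~ST*837*0001~BHT*0019~SE*3*0001~GE*1~IEA*1~"
assert snip_type1_check(sample) == []
```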

    Transforming All Data into a Common XML Format

    Moving on to our next stage, the Source Transform stage. What happens here happens in two parts. First, down here (below on the left), you'll see what we call our transformation module. This drop-down defines the inbound data format so we can work with it. It takes that data and automatically transforms it into an XML representation, so we can work with it in the transformation.

    You can see in this drop-down that we have all the formats you could think of, whether it's CSV or some other delimited file, EDI, HL7 versions 2 and 3, JSON, or even just name-value pairs or Excel sheets. I'm going to choose HL7, so it will automatically transform that for me. I can even define which version of HL7 I'm using, although I don't have to, since our HL7 parser is extremely lenient.
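    The "automatic XML representation" step can be pictured like this: each HL7 segment becomes an element and each pipe-delimited field becomes a child. This sketch is simplified (real HL7 numbering treats the MSH separators specially, and the element names here are illustrative, not PilotFish's actual schema), but it shows the shape of the transformation:

```python
# Simplified sketch: flatten pipe-delimited HL7 v2 into a generic XML tree.
# Element names are illustrative; PilotFish's real HL7 schema differs.
import xml.etree.ElementTree as ET

def hl7_to_xml(message: str) -> ET.Element:
    root = ET.Element("HL7Message")
    for line in message.strip().splitlines():
        fields = line.split("|")
        seg = ET.SubElement(root, fields[0])          # e.g. <MSH>, <PID>
        for i, value in enumerate(fields[1:], 1):
            ET.SubElement(seg, f"{fields[0]}.{i}").text = value
    return root

hl7 = "MSH|^~\\&|LAB|HOSP\nPID|1||12345||DOE^JOHN"
root = hl7_to_xml(hl7)
print(root.find("PID/PID.5").text)  # → DOE^JOHN
```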

    Mapping HL7 Transactions with the Data Mapper in Source Transform Stage

    Over here on the right-hand side is where I actually do my data transformation in our Data Mapper, so I'll pop that open now. You can see it's a 3-pane data mapper. The left-hand side is what I'm mapping from; in this case, HL7, since that's what I'm expecting inbound.

    The right-hand side is what I'm mapping to. In this case, I'm just going to pull some basic patient demographic information out of each message to store for later. We have features like "Friendly Names," as we call them, so if you're not very familiar with HL7 or its schema, you can see in human-readable text what each value within each segment of the message means.

    In the middle is our actual mapping tree. Everything is achieved through drag & drop onto the center; what's really happening under the covers is the writing of W3C-compliant XSLT. This is a fully-featured editor with auto-completion, so if you have resources more familiar with XSLT, they can use this view, and anything done here shows up in the drag & drop view and vice versa.

    Since we use XSLT, which is very robust, for transformations, we provide in this tool palette all of the same functions you might write in the other view, also through drag & drop. We also have inline testing for the Data Mapper: you can feed it a sample file, hit a button, and see below what your output looks like for the mappings so far, just to make sure all the fields land in the correct places.

    Using Routing Rules for Data Targets and Transaction Monitoring

    Moving on to the next stage, the Routing stage, I'll show you two things. First is our routing rules tab. We might have more than one target, as I do in this case, and I can choose to send all the data I'm transforming to all targets, or I can choose something like XPath in the drop-down, which lets me use XPath rules to decide where a message goes, perhaps routing based on some attribute within the message. If I want a message to go to one location but not the other based on an attribute, I can define that here.
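    The idea of attribute-based routing can be sketched in a few lines: inspect a value in the message's XML representation and return the list of targets that should receive it. The message-type values and target names below are made up for illustration, not taken from the demo route:

```python
# Sketch of XPath-style content-based routing. Message types and target
# names are hypothetical examples, not the demo route's actual config.
import xml.etree.ElementTree as ET

def route(message_xml: str) -> list[str]:
    doc = ET.fromstring(message_xml)
    targets = []
    msg_type = doc.findtext("MessageType")
    if msg_type == "ADT":                 # admissions go to the database only
        targets.append("sql-database")
    elif msg_type == "ORU":               # lab results go to both targets
        targets += ["sql-database", "json-web-service"]
    return targets

print(route("<msg><MessageType>ORU</MessageType></msg>"))
# → ['sql-database', 'json-web-service']
```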

    I want to point out that this is also where our transaction monitoring lives. We have a lot of transaction monitor types out-of-the-box. We can send email alerts to a team, or, even more importantly, we have what we call an error route trigger: if some kind of error occurs, we can kick off a whole new route, and you can define what happens there.

    Once I move to the next stage, the Target Transform, we're essentially doing everything we just did in the Source Transform, only in reverse. I may need to do another mapping here, or I may not. In this case I do, since I have two different targets: one mapping onto our first target, the SQL database, and a second onto JSON.

    I'll pop open this mapping again to show you. Now that I've opened the new mapping, you can see on the left the basic patient demographic information we extracted; that's what I'm mapping from. On the right, I'm mapping to what we call SQL XML. This is something we provide out-of-the-box as well, so I can map directly onto a SQL insert statement, as I'm doing here.

    You can see on the right that I'm using the insert function, and I can pull in the tables I want to work with. In this case, I have a table called patient, with columns that match up to the messages I'll be receiving, so I can drag & drop this onto the center and create a SQL insert statement into those columns for each message that comes in.
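    The end result of that mapping is one insert per inbound message. A minimal sketch of what generating such a statement looks like (the patient table and column names are illustrative, matching the demo's description rather than its actual schema):

```python
# Sketch of per-message SQL INSERT generation, analogous to what the
# SQL XML mapping produces. Table and column names are illustrative.

def build_insert(table: str, row: dict) -> tuple[str, tuple]:
    cols = ", ".join(row)
    placeholders = ", ".join("?" for _ in row)   # bind parameters: never
    sql = f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"
    return sql, tuple(row.values())              # string-concatenate PHI

patient = {"mrn": "12345", "last_name": "DOE", "first_name": "JOHN"}
sql, params = build_insert("patient", patient)
print(sql)  # → INSERT INTO patient (mrn, last_name, first_name) VALUES (?, ?, ?)
```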

    Down below, on my Target Transform for the JSON, I'll pop open that mapper quickly to show you what it looks like. On the left is the same basic patient demographic information I'm working with, and on the right I'm mapping it onto a JSON object that I'll then send on to the web service.

    I'd also like to point out that within the Target Transform stage, on the right-hand side, we have our transformation module again. In that same drop-down, which looks just like the first one, I choose what the outbound data format should be, and it automatically transforms that XML back out to JSON, or maybe CSV or EDI, whatever the case may be.
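    That outbound conversion, XML representation in, JSON out, can be pictured with a tiny sketch. This flat, one-level conversion is only an illustration of the idea (element and field names are made up); the product's transformation module handles arbitrarily nested structures:

```python
# Sketch of converting the internal XML representation back out to JSON
# for the web-service target. Field names are illustrative only.
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_str: str) -> str:
    root = ET.fromstring(xml_str)
    # One level deep for illustration: each child element becomes a key
    return json.dumps({child.tag: child.text for child in root}, indent=2)

xml = "<Patient><mrn>12345</mrn><lastName>DOE</lastName></Patient>"
print(xml_to_json(xml))
```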

    Defining the Connectivity Protocol for the Target System

    Moving on to the last stage, the Transport stage. This works just like the Listener: we're defining the connectivity protocol out to the Target System, with all of the same connectivity protocols available out-of-the-box. In this case, I'm using a SQL database Transport, so I'll just enter some basic credentials to get into that database, and it will post the data there. (List of available PilotFish Transports.)

    Also note that we have all the same processors available here as post-processors, in case we need to clean up the data on the way out: encryption, compression, maybe embedding a PDF back into a file, and many other options. (List of available PilotFish Processors.)

    This last stage, just to touch on it again, is just like the first: the Target System. Here we give it a name and, optionally, an icon to represent it, purely for documentation purposes, so we can see what's happening within this route.

    Inline Testing and Debugging the Interface Route at Each Stage

    Once I've configured this end-to-end, I can switch over to our testing mode; we have full inline testing and a debugger. I can start a test from anywhere within the route, and anywhere you see an arrow is where that test may potentially run. Better yet, I can save tests, as I've done in this case, to run over and over if I need to.

    I'm going to open a saved HL7 test. You can see it starts after my Listener stage; since I'm not actively listening on a port right now, it will feed in a sample file and run it through the rest of the route. I'll hit this button, and you'll see either a green checkmark for success or a red 'x' for failure, and then we can walk through what just happened within the route.

    First, within this test, I used a sample file fed into the test itself, so I'll open that up. You can see it's a sample HL7 file, currently flattened. We took in that file, and the transformation module automatically converted it into an XML representation of the HL7 message, which is what you see here. Once we had that, we used the XSLT to parse out the basic patient demographic information we wanted to keep and send on, and that's what it looks like here.

    Moving down the line, we did another XSLT mapping, where we created the insert statement that will automatically be sent on to the database. Below that, you can also see the XSLT where we transformed the data into a JSON object that will go outbound to the web service.

    Looking at the transformation module on the way out, I can see that it converted the XML representation into my actual JSON structure, which goes outbound automatically. In the case of a red 'x' for failure, as here, I know that endpoint isn't active right now, so it gives me a stack trace showing that my connection was refused. In practice, you jump back and forth between testing mode and editing mode, making small changes and tweaks as needed, and then ultimately deploy the route to the eiPlatform.

    Deploying Interfaces into Production

    You can deploy in multiple ways. You can use the IDE itself to drag & drop files for hot deployment by connecting to an eiPlatform server, all within the same screen. You can also use the eiDashboard, our web-based monitoring tool, to upload those files for hot deployment. Or you can simply take the files created behind the scenes and move them to a production environment; we recommend using some kind of source control, such as Git or SVN.

    Try It for Yourself

    We recommend downloading a Free 90-Day Trial of the eiConsole and taking a look at it yourself. More PilotFish videos demonstrating other software features are listed on the summary PilotFish Product Video page. If you have any questions, don’t hesitate to ask.

    If you’re curious about the software features, free trial, or even a demo – we’re ready to answer any and all questions. Please call us at 813 864 8662 or click the button.

    HL7 is the registered trademark of Health Level Seven International.
    X12, chartered by the American National Standards Institute for more than 35 years, develops and maintains EDI standards and XML schemas.
