A few weeks ago, over on PedTalk, I promised I’d make a public walk-through of an insurance fee schedule offering so folks can see the math that I like to do. We’re opening Pandora’s Box here and I don’t want this entry to be 10 pages long, so we’re ignoring some important aspects of this contract offer in order to focus on the math (namely – payer mix, practice coding behavior, why some codes are not paid at all, etc.).
Let’s recap what we’re doing. A dear client wrote to say that she had received a spreadsheet of a potential fee schedule and took me up on my request to analyze it. If you open the spreadsheet above, here’s how it reads:
- Everything in green is from their original spreadsheet.
- Everything in orange is from a Partner report I ran.
- Everything in yellow is calculated within the new spreadsheet.
As spreadsheets go, this one is remarkably nice – as you can see, it outlines a stepped offer with two price increases over 36 months. The payer has provided code-by-code detail and even indicated the expected revenue change.
The first thing I did when I received the spreadsheet was check the math. I have seen more than one payer offer not add up correctly, and you know which way the mistake always goes. My quick dash through it indicated that their math was good – if, indeed, those are the expected fees and volume for this practice, it adds up.
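That "check the math" step is just recomputing the payer's expected revenue from their own fee and volume columns and comparing it to their stated total. Here's a minimal sketch of the idea – the codes, fees, and unit counts below are made up for illustration, not taken from the actual offer:

```python
# Each row: (procedure code, offered fee, payer's expected annual units).
# These values are hypothetical placeholders.
offer = [
    ("99213", 75.00, 1200),
    ("99214", 110.00, 800),
    ("90460", 25.00, 2500),
]

# The payer's "expected revenue" should simply be fee x units, summed.
expected_revenue = sum(fee * units for _, fee, units in offer)
print(f"Recomputed expected revenue: ${expected_revenue:,.2f}")
```

If your recomputed total doesn't match the total printed at the bottom of the payer's spreadsheet, that's the first question to ask them.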
But when I talk about doing the math, it’s that “if” above that I focus on.
So, the second thing I did was to run a comparative report on the practice’s system. All you Partner clients and most of the rest of you can do this easily. My goal was to compare what the insurance company says was the volume and revenue for the charges to what the practice actually sees. I then dropped my report spreadsheet into the spreadsheet from the payer and lined up the procedures. This was, by far, the most time-consuming part of the process.
The first thing I noticed is that each dataset contained procedures that did not appear in the other. I don't know the exact time sample that the payer used for their expected unit count, so I'm not surprised the numbers differ, even by a fair amount. I'm not, yet, going to be cynical. You'll see that there is a $185,000 difference (F100 vs. J100) which can be explained by both the time sample and any difference in how the payer's patients are counted on the practice system (if they haven't configured their insurance groups just perfectly, for example).
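Lining up the two datasets by procedure code is where those one-sided procedures fall out naturally. A sketch of the comparison, using made-up codes and unit counts (in practice these would come from the payer's spreadsheet and your system's report):

```python
# code -> payer's expected units (illustrative values)
payer = {"99213": 1200, "99214": 800, "99173": 300}
# code -> actual units from the practice's own report (illustrative values)
practice = {"99213": 1150, "99214": 820, "90460": 2500}

# Codes that appear in only one source are the mismatches worth flagging
# before you compare any revenue figures.
only_payer = sorted(set(payer) - set(practice))
only_practice = sorted(set(practice) - set(payer))
shared = sorted(set(payer) & set(practice))

print("In payer offer only:", only_payer)
print("In practice report only:", only_practice)
for code in shared:
    print(code, "- payer units:", payer[code], "| practice units:", practice[code])
```

Doing this by hand in the spreadsheet is the time-consuming part; a quick join like this is the same operation.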
The per-charge-revenue differences are closer than usual, I find, especially for the large-volume codes. It might be worth looking into a few individual procedures that have decent volume but a difference that costs real money (99173, 99223). You'll see that there is a 1.4% difference between the expected payments and what the practice really received – 1.4% is, sadly, in the acceptable range for uncollected balances, particularly as more and more of your insurance balances end up with the patients.
If we assume that the practice’s procedure distribution is closer to reality than the payer’s – or at least that reality lies somewhere in between – we simply need to calculate the new revenue based on the offered prices and the real volume. You will see the results in columns N and Q. The results are straightforward – it looks like a 4.8% increase out of the gate, followed by a second increase of 3.5% (N102 and Q102).
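The recalculation in columns N and Q is the same arithmetic as before, just with the offered step fees applied to the practice's actual unit counts. A sketch with hypothetical fees and volumes (so the percentages below won't match the 4.8%/3.5% in the real spreadsheet):

```python
# Each row: (code, current fee, step-1 offered fee, step-2 offered fee,
#            practice's actual units). All numbers are made up.
rows = [
    ("99213", 70.00, 74.00, 76.50, 1150),
    ("99214", 105.00, 110.00, 114.00, 820),
]

# Total revenue under each fee schedule, using the practice's real volume.
current = sum(cur * units for _, cur, _, _, units in rows)
step1 = sum(s1 * units for _, _, s1, _, units in rows)
step2 = sum(s2 * units for _, _, _, s2, units in rows)

print(f"Step-1 change vs. current: {(step1 / current - 1) * 100:.1f}%")
print(f"Step-2 change vs. step 1:  {(step2 / step1 - 1) * 100:.1f}%")
```

The point is that the percentage increase you care about is computed against *your* volume, not the payer's assumed volume.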
Without any context, I’d say a 36-month contract that puts you ahead by 8.3% over 3 years is pretty good in this climate. However, the context is very important! If this payer is only at 85% of Medicare to start, this is an insulting offer.
Your mileage may vary; contents may settle during shipping.
[Special thanks to Philomena Smith!]