Consolidated Tapes are Good in Theory, Completely Wrong in Implementation

Kristofer Spinka
Published in Herding Sheep · May 1, 2019


The never-ending debate over consolidated tape feeds should not be a debate about whether they are useful in theory, but rather about how they are physically realized. The solution is really quite simple, but for whatever reason regulators and private industry cannot seem to find their way to water.

What feed users want is very clear:

  1. A consolidated data stream
  2. No loss of fidelity (full market, order by order)
  3. No additional latency over native feeds
  4. No loss of beyond-the-CLOB data (auctions, imbalance data, etc.)
  5. Replay and recovery

Given this set of requirements, the solution should be beyond obvious: the consolidated tape feeds should quite literally just be a normalized feed standard, emitted in parallel with the native feeds by the venues themselves. Should a feed consumer wish to use some sort of aggregation network for a single-port solution, it is on them to decide where the latency centroid of that single delivery point sits. For most participants consuming from the usual haunts, though, all venues should deliver to a few delegated “fabrics”, not unlike the way peering is done in the world of Internet engineering.
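To make the idea of a normalized feed standard concrete, here is a minimal sketch of what an order-by-order tape message emitted in parallel by each venue might look like. The field names, types, and event set below are purely illustrative assumptions, not an existing or proposed specification.

```python
# A minimal sketch of a normalized, order-by-order tape message that each
# venue could emit in parallel with its native feed.  Every field name and
# the event set here are illustrative assumptions, not an existing standard.
from dataclasses import dataclass
from enum import Enum


class EventType(Enum):
    ADD = 1       # new order added to the book
    MODIFY = 2    # order price or size changed
    DELETE = 3    # order removed from the book
    TRADE = 4     # execution
    AUCTION = 5   # auction / imbalance update (beyond-the-CLOB data)


@dataclass(frozen=True)
class TapeMessage:
    venue_id: str            # emitting venue, e.g. "XNAS"
    symbol: str              # instrument identifier
    event: EventType
    order_id: int            # venue-local order reference
    price: int               # price in integer minimum increments
    size: int                # shares or contracts
    venue_timestamp_ns: int  # matching-engine timestamp, nanoseconds
    sequence: int            # per-venue sequence number for replay and recovery
```

In a sketch like this, the per-venue sequence number is what would make replay and recovery (requirement 5) mechanical: a consumer or fabric node that detects a gap simply requests retransmission of the missing range from the emitting venue.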

Assuming these basic requirements can be satisfied, not only do costs go down for all participants, but the margins for venues relying on tape revenues go up.

Yes, there was a time when we needed dedicated networks (SFTI, etc.) and a central processing agent to compute the NBBO and so on, but times have changed: hardware is fast and cheap, and, let’s not kid ourselves, the NBBO is all a matter of perspective. There is no reason why every TapeFabric (no TM, don’t worry) node can’t compute it locally and inject it back to local consumers. In fact, this would be a much more realistic view of the NBBO from the locally attached consumers’ point of view.
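As a rough illustration of “process it locally”, the sketch below shows how a single fabric node might maintain its own NBBO view from the top-of-book quotes it actually receives. The class and structures are hypothetical, and real feeds of course carry far more state than this.

```python
# A rough sketch of per-node NBBO computation at a TapeFabric site.
# Structures and names are hypothetical illustrations only.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class TopOfBook:
    bid_price: int  # prices in integer minimum increments
    bid_size: int
    ask_price: int
    ask_size: int


class LocalNBBO:
    """Maintains one node's view of the NBBO from locally received quotes."""

    def __init__(self) -> None:
        # Latest top of book per venue, as seen from this node.
        self._books: Dict[str, TopOfBook] = {}

    def on_quote(self, venue_id: str, book: TopOfBook) -> None:
        # Update this venue's top of book as the quote arrives locally.
        self._books[venue_id] = book

    def nbbo(self) -> Optional[Tuple[int, int]]:
        # Best bid is the highest bid across venues; best offer the lowest ask.
        if not self._books:
            return None
        best_bid = max(b.bid_price for b in self._books.values())
        best_ask = min(b.ask_price for b in self._books.values())
        return best_bid, best_ask
```

Because each node only folds in the quotes it has actually received, its NBBO is exactly the “matter of perspective” view described above, rather than a single centrally computed answer disseminated back out to everyone.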

Beyond the basics, I would love to see a radio feed for the consolidated tapes as well, originating from the venues themselves. This sort of point-to-multipoint radio feed could be consumed either directly or by the various TapeFabric sites to cut down on latency and costs. After all, it’s the same data for everyone, so it’s a bit silly when you realize that we’re transmitting so many copies of it so many times over so many different strands of fiber.

That’s it; there really isn’t much more to be said on the topic. The issue is perfectly clear: the physical implementation of consolidated tapes misses the mark by miles because of the idea that the data should be backhauled from its emission point to a central processor and then disseminated from there.

Thoughts, corrections and arguments are always welcome.
