Wednesday, 30 July 2014

Human Overlords

Seems every week there's some sensationalist rant written on famous real (or digital) paper. So it's refreshingly surprising to actually see a level-headed account of current market dynamics... yes... you can put the pitchforks down.

It's written by none other than O'Hara, one of the leading academics on market microstructure. She makes some great sound bites, for example that HFTs can be both good and bad (shock horror, I know), and that a (another) transaction tax will do nothing to rein in these evil computers.

The paper is primarily about how markets have fundamentally changed at the microstructure level, without the usual ranting about the usual negative results.

What's changed is there's a new layer of indirection when interacting with the market and/or interpreting market data, one based on the venue's pricing structure, order/cancel/match/trade processes, and speed. It's new because that's where the algos live (and die), and her point is that the academic community is analyzing the algos' activity as if it were human activity, when in fact they should be trying to decode what their human overlords programmed them to do.
All of this is old news for those of us in the trenches, yet someone needs to crunch it down into a 1000-word article without the jargon, published in the New York Times, perhaps with the title
"The old Stock Market is dead... Jim, it got smarter"

In any case, here's the link

Sunday, 22 June 2014

Synth, Place, Route for SW

Meeting timing in an FPGA design can be painful at best. The pain comes from the long wait times between "compiles", which by itself isn't so bad. What sucks is that (for myself at least) you cannot do anything else while it's running (except rant on a blog!).

In theory you could go on and dick with the code while it's running, and start multiple runs with different code/configs, yet in practice I've found multitasking like this usually ends in not knowing what you actually changed after all the compiles have finished.

As a software guy it's pretty hard to imagine wtf I'm talking about. Timing? Routing? Mapping? Placing? wtf? A more concrete piece of a complete logic "compile" looks like the pic on the left. This is a carry chain that's 1 nsec over budget, meaning it's a = b + c; for a specific bit in an integer. The particular problem here is it takes 1 nsec too long because it's more than a simple addition and has some other funky stuff that gives the tools problems.

It got me thinking: how do you explain what modern EDA tools do to a software guy?

An answer is: imagine your system memory is no longer deterministic. This means if you write 0x1000[0] = 1; there's no guarantee a read of 0x1000[0] == 1. The only rule for getting deterministic behaviour of 0x1000[0] == 1 is that the read & write instructions are close to the memory data location. Meaning the location of the CPU's instructions in memory must be close to memory location 0x1000 for deterministic behaviour.
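To make the analogy concrete, here's a toy sketch in Python. Nothing here is a real API - the FlakyMemory class and the DISTANCE_BUDGET number are invented purely to illustrate the "reads only work if the instruction is close to the data" idea:

```python
import random

DISTANCE_BUDGET = 64  # made-up max instruction-to-data distance for a reliable read

class FlakyMemory:
    """Toy memory where reads are deterministic only at close range."""
    def __init__(self):
        self.cells = {}

    def write(self, addr, value):
        self.cells[addr] = value

    def read(self, addr, instruction_addr):
        # Deterministic only if the reading "instruction" is placed
        # within DISTANCE_BUDGET of the data it touches.
        if abs(instruction_addr - addr) <= DISTANCE_BUDGET:
            return self.cells.get(addr)
        return random.randint(0, 255)  # too far away: garbage comes back

mem = FlakyMemory()
mem.write(0x1000, 1)
print(mem.read(0x1000, instruction_addr=0x1010))  # close: always 1
print(mem.read(0x1000, instruction_addr=0x8000))  # far: anything at all
```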

If that does not sound like a raving lunatic's version of sudoku, then you can start to imagine how a compiler would look. The first part is generating the correct sequence of opcodes based on the Verilog/VHDL input file. After that it's an optimization problem of finding the best opcode/memory-location combination to get correct read/write memory behaviour - in a finite amount of time, an NP-hard problem.
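The optimization part of the analogy can be sketched too. Below is a toy "placer" that randomly perturbs instruction locations and keeps improvements until every read/write sits within budget of its data - a crude stand-in for what real place & route tools spend hours doing. Every name and number here is invented for illustration:

```python
import random

DISTANCE_BUDGET = 64  # made-up distance that "meets timing"

def timing_violations(placement, accesses):
    """Count accesses whose instruction is placed too far from its data."""
    return sum(1 for instr, data_addr in accesses
               if abs(placement[instr] - data_addr) > DISTANCE_BUDGET)

def place(accesses, slots, iters=10000):
    """Random perturbation, keep improvements - annealing minus the ceremony."""
    placement = {instr: random.choice(slots) for instr, _ in accesses}
    best = timing_violations(placement, accesses)
    for _ in range(iters):
        if best == 0:
            break                        # timing closed, go home
        instr, _ = random.choice(accesses)
        old_slot = placement[instr]
        placement[instr] = random.choice(slots)
        cost = timing_violations(placement, accesses)
        if cost <= best:
            best = cost                  # keep the move
        else:
            placement[instr] = old_slot  # revert
    return placement, best

# Two "instructions" that each need to land near their data:
accesses = [("load_b", 0x1000), ("store_a", 0x2000)]
slots = list(range(0x0F00, 0x2100, 16))  # candidate instruction locations
placement, violations = place(accesses, slots)
```

When feasible slots exist, violations almost always reaches 0; when they don't, no amount of iterating helps - which is the toy version of the timing-closure pain below.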

And finally, if you believe all of that, then it's easy to write a program that compiles no problem but is impossible to make work - thus the pain of timing closure.

... at least in some pseudo hand-wavy sudoku analogy.