Although there are differences in the algorithms used by the various processors, shouldn't these differences - in terms of the results they produce - reasonably be very small? If they are not small, shouldn't that issue be addressed rather than side-stepped by settling for cascading? Cascading may be better than nothing, but it only catches some of the lost customers, so surely it isn't a good substitute for tighter processing in the first place?
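For clarity on what cascading means here: a charge declined by the primary processor is simply retried with a fallback processor, which is why it only recovers some of the lost sales rather than preventing bad declines in the first place. A minimal sketch of the idea, where the processor objects and their charge() method are hypothetical stand-ins rather than any real gateway's API:

    class Declined(Exception):
        """Raised by a processor when it rejects a transaction."""

    def charge_with_cascade(transaction, processors):
        """Try each processor in order; return the first successful charge."""
        last_error = None
        for processor in processors:
            try:
                return processor.charge(transaction)  # success: stop cascading
            except Declined as err:
                last_error = err                      # declined: fall through to the next processor
        # Every processor declined: the sale is lost despite the cascade.
        raise last_error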
There are also issues of perception/reputation, cost, and ease of use, but it surprises me that there isn't a clearer view as to which is the best processor. The bigger sites in particular should be able to test which one allows the most sales while trapping the maximum number of frauds. Yet their choices of processor suggest a mix of views, if they reflect actual views at all rather than just habit.
With more objective comparison, processors using inefficient algorithms should lose business, and their natural response to that pressure would be to tighten them up, to everyone's benefit. I suspect that the "safety net" of cascading is going to make people look at their processors even less critically than we appear to now.