It is a less than ideal state for production software, and it would seem to me to be the result of inadequate testing - something that is, quite simply, inexcusable. This is particularly true in this day and age.
The general approach to testing consists of various levels - function, unit, system, integration and acceptance, to name a few - and within each, the idea is to have at least one scenario exercise each and every part of the code. Testing should also include as many 'twists' as can be identified, to make sure unusual situations are handled correctly. As testing progresses, it is usual for problems to be found and additional scenarios to be identified. The problems are then fixed and the new scenarios added to the testing for the next round.
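As a rough sketch of that idea, here is what one unit-level scenario plus a couple of 'twists' might look like (the `parse_price` function and its behaviour are invented purely for illustration - eBay's actual code is, of course, not public):

```python
def parse_price(text):
    """Hypothetical function under test: turn a price string like '$1,234.50' into a number."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

def test_parse_price():
    # One scenario to exercise the normal path through the code...
    assert parse_price("$1,234.50") == 1234.50
    # ...and a few 'twists' - unusual inputs that should still be handled correctly.
    assert parse_price("  $0.99 ") == 0.99
    try:
        parse_price("$")
    except ValueError:
        pass  # failing loudly on nonsense input is the correct behaviour
    else:
        raise AssertionError("expected a ValueError for '$'")

test_parse_price()
```

When the next round of testing turns up a new problem, the fix goes into `parse_price` and the offending input becomes another assertion - exactly the fix-and-add cycle described above.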
For a large system - like eBay - this will translate into a rather large exercise ... where the image of a million monkeys on a million keyboards finds a somewhat appropriate setting. However, there are alternatives.
Automated testing software has been around for a while. A testing process can be built up and run repeatedly with minimal human effort, with the added benefit that repeat tests will be identical. You can even set up expected results and have the automation software monitor the success of a testing cycle, making repeat tests easier still.
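The 'expected results' part is the heart of it. A minimal sketch, assuming a table-driven harness (the `shipping_cost` function and its rates are made up for illustration - real automation tools would load the cases from a file):

```python
def shipping_cost(weight_kg):
    """Hypothetical function under test: flat rate plus a per-kilogram charge."""
    if weight_kg < 0:
        raise ValueError("weight cannot be negative")
    return 5.00 + 2.50 * weight_kg

# Expected results recorded up front, once. Every repeat run uses the identical
# cases, and the harness - not a human - judges success.
CASES = [
    (0, 5.00),
    (1, 7.50),
    (4, 15.00),
]

def run_cycle():
    """Run every case and return the list of failures (empty list means PASS)."""
    failures = []
    for weight, expected in CASES:
        actual = shipping_cost(weight)
        if abs(actual - expected) > 1e-9:
            failures.append((weight, expected, actual))
    return failures

if __name__ == "__main__":
    failed = run_cycle()
    print("PASS" if not failed else "FAIL: %r" % failed)
```

Because the cases and expected answers live in data rather than in someone's head, the cycle can be re-run after every change with no variation and no extra effort.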
But do not be deceived: setting up such a test bed is not a trivial exercise, and maintaining it is just as important. Once properly done, however, the completeness of testing it offers on every subsequent test cycle is unequalled. There will always be some elements of testing that require human involvement, but the bulk of the boring, repetitive and time-consuming effort will be performed without cursing, swearing or coffee breaks.
eBay and its systems are big enough to warrant the effort of automated testing, but considering some of the problems that have been reported, one has to wonder at times what controls are actually in place for promoting software changes into the production environment.
Nevertheless, I must concede that testing - no matter how thorough - very rarely results in robust, functional software, especially the first time around. The old adage that 'truth is stranger than fiction' applies to any and all areas of computer systems. No matter how many twists and turns, or scenarios of double backward somersault with face plant, a testing team can imagine, real people in the real world will come up with some doozies. The sign of a well-written piece of software is that even these are handled in a controlled manner: they may not be actioned correctly, but they won't stuff something up, disappear into the ether or crash the system. All the same, these things are bound to happen, and there will need to be some changes - but with thorough testing, the negative impact of those changes should be minimal.
Even so, there are some things I have heard about that, from my experience, simply should not happen.
Still, considering the size and complexity of eBay's systems, there is a commendable degree of competence. But for all those who put in the hard yards with conscientious diligence, to have a few silly problems in a high-profile corner of the eBay systems scene bring bucketloads of faecal material down upon them is unfair. But it happens.
The problems with eBay Apps are much like going on a cruise and having your toast burnt every time you turn up for breakfast. It's just one small thing - but it ruins the 'experience'.