Avoiding Technology Failure

50% of all "computerization" projects fail

We read about a few of the most spectacular failures in the newspapers, where millions were lost on projects that simply didn't work, but most failures pass without notice. In small business, failure most often takes the form of an inadequate return on investment, and the owner frequently doesn't even realize the system has failed.

Failure almost always stems from inadequate planning or lack of sponsorship within top management. Planning failure may result from incomplete understanding of the problem, a system design too rigid to accommodate change, a miscalculated timeframe or budget, or from exceeding the expertise of available staff. Often failure is political in origin: a "solution" was selected first, egos (and careers) became attached to that "solution", and only then came the attempt to make the "solution" fit the problem.

Lack of buy-in and sponsorship within top management can kill even the best-planned project. You have to have commitment from the top, or you might as well not even start.

Small Business Failure

Small business has an additional, unique failure mode. Owners spend freely on computers and other tangible hardware, but begrudge every dime spent on less tangible knowledge and software. The hardware is just an expensive distraction without knowledge, training, and carefully fitted software. Again, the owner is often completely unaware the system has failed, knowing only that the business should be doing better.

Rule: If hardware is over 50% of the cost of your "solution", you'd better take a long hard look at what you're doing.
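
A quick back-of-the-envelope check makes the rule concrete. This little Python sketch uses invented dollar figures purely for illustration:

    # Worked example of the 50% rule -- all dollar figures are hypothetical.
    hardware  = 12_000   # PCs, server, network gear
    software  = 4_000    # licenses
    knowledge = 2_000    # training and consulting

    total = hardware + software + knowledge
    share = hardware / total            # 12000 / 18000 = 0.666...

    if share > 0.5:
        print(f"Hardware is {share:.0%} of the project -- take a long hard look.")

Here hardware is two-thirds of the spend, a strong hint that knowledge and software are being shortchanged.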

Failure by Design

Attempting to build a single monolithic system to handle all your needs risks failure. Monolithic systems are very fragile: a failure at any one point brings down the entire system, and the whole system has to be deployed at once. It is better to build a modular, distributed system where individual modules can be deployed and upgraded as needed, as sketched below. The weaknesses of this approach are that integration among modules may not be perfect and that administering a distributed system is more difficult, but it is far more likely to work well enough.
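
A minimal sketch of the modular idea, in Python: each business function sits behind the same narrow interface, so one module can fail, be replaced, or be upgraded without taking the rest down. The module names here are invented for illustration.

    # Each module implements one narrow interface; the system runs them
    # independently, so one failure stays contained.
    class Module:
        name = "base"
        def run(self):
            raise NotImplementedError

    class Billing(Module):
        name = "billing"
        def run(self):
            return "invoices posted"

    class Inventory(Module):
        name = "inventory"
        def run(self):
            raise RuntimeError("stock database offline")

    def run_all(modules):
        for m in modules:
            try:
                print(m.name, "->", m.run())
            except Exception as err:
                # One failed module does not bring down the others.
                print(m.name, "-> FAILED:", err)

    run_all([Billing(), Inventory()])

In a monolith, the inventory failure above would have taken billing down with it; here it is an isolated incident, and a fixed Inventory module can be dropped in without redeploying anything else.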


Failure on a Grand Scale - Client-Server Computing

Only a few years ago, client-server was the darling of the industry, and companies poured hundreds of millions into this new, cost-saving mode of computing. Today, client-server is widely considered dead.

The theory behind client-server seemed outstandingly reasonable: distribute computing power to thousands of (often pre-existing) networked PCs running a "client" program with an easy-to-use graphical user interface, and keep the back-end database processing on application servers to hold network traffic down (the division of labor is sketched below). Use economical Intel-based servers. Junk those expensive mainframe dinosaurs and cryptic Unix boxes, and replace their expensive priesthoods with lower-paid, MCSE-certified PC support staff.
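
The division of labor itself is simple enough to show in a few lines of Python. The port number and query below are invented; a real deployment would of course involve a GUI and a genuine database:

    import socket, threading

    srv = socket.create_server(("localhost", 9000))   # the back end listens

    def serve_one():
        conn, _ = srv.accept()
        query = conn.recv(1024).decode()
        # Stand-in for the real database work done on the server side.
        conn.sendall(f"3 rows match {query!r}".encode())
        conn.close()

    threading.Thread(target=serve_one, daemon=True).start()

    # The "client": in practice a GUI sits here; only the short query and
    # the short answer cross the network, which was the whole attraction.
    cli = socket.create_connection(("localhost", 9000))
    cli.sendall(b"SELECT COUNT(*) FROM orders")
    print(cli.recv(1024).decode())
    cli.close()
    srv.close()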

If anyone had actually been thinking, rather than trying to keep up with "what everyone is doing", they would have seen the fatal flaw. Client-server did not collapse from a flaw in the concept; it failed from a flaw in implementation.

Client software was universally rolled out on Microsoft Windows, an unstable, network-crippled, nearly unmanageable environment. Most damaging, there was no functional way to roll out changes to the individual workstations. When a change was made in business logic, a new version of the client program had to be installed on each and every one of thousands of individual Windows PCs, each one different from all the others.
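
What was missing was any automatic distribution mechanism, something even as simple as the Python sketch below, where the client checks the server for the current version of the business logic at startup and fetches it if the local copy is stale. The server URL, endpoints, and version scheme here are invented for illustration.

    # Hypothetical self-updating client -- server URL and endpoints invented.
    import urllib.request
    from pathlib import Path

    SERVER = "http://appserver.example.com"     # hypothetical application server
    LOGIC  = Path("business_logic.dat")
    MARKER = Path("version.txt")

    def fetch(path):
        return urllib.request.urlopen(SERVER + path).read()

    def update_if_stale():
        current = fetch("/version").decode().strip()
        if not MARKER.exists() or MARKER.read_text().strip() != current:
            # One download replaces a technician's visit to the desk.
            LOGIC.write_bytes(fetch("/logic"))
            MARKER.write_text(current)

    update_if_stale()   # run at every client startup

One such check at every startup replaces thousands of hand installs; this is essentially what the NC and Java approach mentioned below provides.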

Under the theory that "one operating system everywhere reduces costs", the server side was deployed on Microsoft Windows NT, an unstable, low-performance, departmental-grade server. "Server" soon came to mean "server farm". Windows support staff cost less than those Unix and mainframe gurus, but there were three or four times as many of them, plus managers.

On top of all this, every nine months or so Microsoft issued a new version of some major component of the overall system, often introducing incompatibilities. You couldn't both expand and stay with what you already had, because all new equipment and software came with the new version and Microsoft withdrew support for the older one. Once again, roll the new version out to each and every one of those PCs, then fix what broke.

System support costs soared until those "expensive dinosaur" mainframes looked like absolute bargains. The final result: IBM S/390 mainframe ads featuring rampaging dinosaurs shredding a server farm, under the banner "We're Back! Stronger than ever!".

Today, client-server computing is being very successfully deployed using NCs (Network Computers), Java, and IBM's WSOD (WorkSpace On Demand). You never hear about the successes, though, because they fly directly in the face of Microsoft's mighty marketing machine. "Better", says the press, "just to say 'client-server is dead'" (and "NCs are dead", and "Java is dead" while you're at it).

Moral: think, plan, take your time. Use what will work, even if it's not what's most popular. If everyone else is doing something different, well, everyone else could be wrong; client-server shows that can happen. "If everyone else was jumping off a cliff, would you jump off too?" - Your Mom.




© Andrew Grygus - Automation Access - www.aaxnet.com - aax@aaxnet.com
All trademarks and trade names are recognized as property of their owners.