
11 signs your IT project is doomed

Roger A. Grimes | May 7, 2013
The IT world is no stranger to projects that go down in flames. In fact, anyone who has had the unenviable pleasure of participating in a failed IT effort likely sensed its demise well before the go-live date. That sixth sense is invaluable in a competitive field like IT -- but only if it is acted on promptly and professionally.

Red flag No. 5: The project targets the minimum specs

Nothing kills project success like buying the bare-minimum hardware.

Vendors are notorious for keeping project costs down by underselling the hardware needed for optimum results, typically quoting both a "bare minimum" spec and a recommended configuration. Smart project leaders go beyond even the recommended hardware specs; if something goes wrong, at least the vendors and your customers won't be pointing fingers at penny-wise, pound-foolish hardware purchase decisions. Also, make sure all purchased hardware and software has been tested for compatibility with everything else in the stack, so neither side can point fingers early if something breaks.

Sometimes the devil is in the details when it comes to purchasing technology. For instance, if a vendor says it has great experience with MySQL running on Linux, be careful about requiring MySQL to run on Windows. The vendor may say it can do it, but it may have little experience doing so. If you deviate from what the vendor recommends, get the vendor's acceptance in writing. It also never hurts to check with past customers who ran a similar configuration to learn how things went when they deviated, even slightly, from the recommended specs.

Red flag No. 6: Testing is an afterthought

Testing is essential to project success. Whether it is unit testing (which tests one facet of the system) or integrated testing (which tests all components, including existing interfacing systems), testing should be done by current employees following a testing script. The testing script should spell out, step by step, every input the testers should make. And you should detail, ahead of time, what every output should look like.
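As a rough sketch of what "inputs and expected outputs, detailed ahead of time" can look like in practice, here is a hypothetical unit test. The function `order_total` and its values are illustrative only, not from the article; the point is that each scripted input has a predetermined expected output.

```python
import unittest

# Hypothetical system under test: one facet (order totaling) in isolation.
def order_total(prices, tax_rate):
    """Sum item prices and apply a flat tax rate, rounded to cents."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

class OrderTotalUnitTest(unittest.TestCase):
    """Unit test: scripted inputs with expected outputs decided up front,
    so any tester can run the steps and spot a deviation."""

    def test_scripted_inputs(self):
        # Step 1: enter two items at 10.00 and 5.50 with 8% tax.
        self.assertEqual(order_total([10.00, 5.50], 0.08), 16.74)
        # Step 2: an empty cart must total exactly zero.
        self.assertEqual(order_total([], 0.08), 0.0)

if __name__ == "__main__":
    unittest.main()
```

Keeping the expected values in the script itself means testers report deviations against a fixed target rather than their own judgment.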

Testing data and processes should vet all scenarios, including both good and bad data. Sometimes the results from known-bad data are more revealing than those from the desired outcome. Testing should also include load tests to see how the system and network respond under heavy utilization. Testers should understand the expected outcomes and be expected to report all deviations.
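A hedged sketch of testing with known-bad data: the validator below and its field names (`customer_id`, `quantity`) are hypothetical, but it illustrates asserting that bad input is rejected with a clear reason rather than silently accepted.

```python
# Hypothetical record validator; the rules are illustrative only.
def validate_record(record):
    """Return a list of error strings; an empty list means the record is good."""
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    if record.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    return errors

# Good data should pass cleanly...
assert validate_record({"customer_id": "C1", "quantity": 3}) == []

# ...and known-bad data should fail with specific reasons --
# often the more interesting result.
assert validate_record({"quantity": -2}) == [
    "missing customer_id",
    "quantity must be positive",
]
```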

Red flag No. 7: No recovery plan is in place in the event of failure

Despite our best efforts, plans don't always go as expected. Every project leader needs to know what go-live success looks like -- and when it's time to admit failure and begin again another day. Every project should have a go-live backup plan in case failure becomes the only option.

Too many go-live events are driven by the belief that "nothing can go wrong." Leaders of these projects often fail to ensure that good backups are taken and verified. They don't define ahead of time what success looks like, nor what constitutes failure. They don't prepare the team for what to do in the event of a go-live crash. Many brand-new projects hit a fatal stumbling block only to discover they can't roll back. This is poor planning.
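One concrete way to make "backups are taken and verified" part of the go-live plan is a scripted gate that refuses cutover unless the backup's checksum matches what was recorded when it was taken. This is a minimal sketch; the paths and values are stand-ins, not anything from the article.

```python
import hashlib
import os
import tempfile

def backup_is_verified(backup_path, expected_sha256):
    """Go-live gate: the backup file must exist and its SHA-256
    must match the checksum recorded at backup time."""
    if not os.path.exists(backup_path):
        return False
    digest = hashlib.sha256()
    with open(backup_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Demo with a throwaway file standing in for a real database dump.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"database dump contents")
    path = f.name

recorded = hashlib.sha256(b"database dump contents").hexdigest()
assert backup_is_verified(path, recorded)       # safe to proceed with go-live
assert not backup_is_verified(path, "0" * 64)   # halt: backup can't be trusted
```

The key design point is that the checksum is recorded when the backup is made, so a corrupted or incomplete backup is caught before cutover, not during a failed rollback.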

