A Strategy for Implementing Systems

The choice of a strategy for systems implementation is one of those choices organizations tend to make subconsciously. The strategy seems to grow out of the necessity to solve problems during the implementation. There are two dangers to such an approach. The first is a failure to explore all the possible strategies at the company's disposal; by the time you are actually implementing, your choices will be limited by previous decisions. The second is an inability to plan properly. How can you plan your implementation when you have not decided how you intend to implement?

It may seem surprising, but the failure to choose an implementation strategy is frequently the cause of poor implementation planning and failed implementations. It is very important to make this choice as early as possible. The implementation strategy you select will become the basis of your implementation plan. Your strategy defines how you are going to do things; your completed plan will tell you when things will be done and who will do them.

The process of choosing an implementation strategy can be broken down into three related questions: What will be the scope of the first implementation? What will be the method of cutover? And what data processing design methodology will be used?

SCOPE.
The question of scope can be approached in two ways, which can be used separately or combined. One approach to defining scope is to implement by function, such as implementing Bill of Materials first and then Inventory. The other approach is by product, such as implementing only one product or a class of products at a time. In large companies it may be possible to further limit scope by plant, since product and plant boundaries may coincide.

These two approaches may be combined in large companies to reach a manageable scope; for example, implementing Bill of Materials for only one product line.

In general, the functional approach works best if you are implementing a modular system or a stand-alone, single-function system, such as an Inventory package that will be interfaced with other applications.

The product-based approach works best with the implementation of fully integrated systems, such as an MRP II package. It can be difficult to separate an integrated system into functional modules; this should not be surprising, since their integrated functionality is their selling point. If you do use the functional approach and are successful in dividing up an integrated package, you will probably find it necessary to build temporary bridges between the new functions and existing systems. These bridges are temporary because they will not be needed once the additional functional modules of the new system are implemented.

An example of this is bridging the inventory module to the current purchasing system in order to get purchased material receipts into inventory. Once the new purchasing module is implemented, the bridge is no longer needed.
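
As an illustration only, the sketch below shows what such a temporary bridge might look like: a small program that reads a daily receipts extract from a hypothetical old purchasing system and rewrites it as receipt transactions for a hypothetical new inventory module. The file names, column names, and the "RCPT" transaction code are assumptions for the example, not the layout of any particular package.

    # Minimal bridge sketch: read a daily receipts extract from the old
    # purchasing system and rewrite it as receipt transactions for the new
    # inventory module. File names, column names, and the "RCPT" transaction
    # code are hypothetical, not the layout of any particular package.
    import csv

    def bridge_receipts(purchasing_export, inventory_import):
        """Translate purchased-material receipts into inventory receipt transactions."""
        with open(purchasing_export, newline="") as src, \
             open(inventory_import, "w", newline="") as dst:
            reader = csv.DictReader(src)              # old system's export layout
            writer = csv.writer(dst)
            writer.writerow(["TRAN_CODE", "ITEM", "QTY", "LOCATION", "PO_NUMBER"])
            for row in reader:
                writer.writerow([
                    "RCPT",                           # receipt transaction code in the new system
                    row["part_number"],               # old system's item identifier
                    row["qty_received"],
                    row.get("stockroom", "MAIN"),     # default location if the old system has none
                    row["po_number"],
                ])

    if __name__ == "__main__":
        # Run once per day until the new purchasing module makes the bridge unnecessary.
        bridge_receipts("po_receipts_today.csv", "inventory_receipts_today.csv")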

The size of your company and the resources you have to dedicate to an implementation should help you decide the scope of your first implementation. If your company has multiple plants, consider starting with one plant. If your resources are limited, consider one line within a plant.

If your company is small, you may decide to bring the whole plant up at once... be careful. The purpose of keeping the first implementation small is to minimize the pain involved in learning from mistakes. You can apply the knowledge you gain from your mistakes if you start off small. If you implement everything at once, you miss the opportunity to benefit from your mistakes, and the impact of those mistakes will be magnified by the large scope of your implementation.

METHOD OF CUTOVER.
There are two basic methods of cutover that can be used. The first is to run in parallel. When running in parallel, the user enters data into the old system as well as the new system until they are convinced of the accuracy of the new system.

The advantage of running in parallel is that it provides security against any major failure that may have lain undiscovered in the new system. The disadvantage is the duplication of effort required to maintain both systems. There also tends to be less impetus for an ambivalent user to make the new system work while the old system is still functioning.
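
For what it is worth, the following sketch shows one way the accuracy check during a parallel run might be automated: comparing on-hand balances extracted from both systems and listing the items that disagree. The file and column names ("item", "on_hand") are assumptions; both extracts are presumed to be simple CSV files.

    # Parallel-run reconciliation sketch: compare on-hand balances extracted
    # from the old and new systems and list the items that disagree.
    # File and column names ("item", "on_hand") are hypothetical.
    import csv

    def load_balances(path):
        """Read an extract of on-hand balances into a dictionary keyed by item."""
        with open(path, newline="") as f:
            return {row["item"]: float(row["on_hand"]) for row in csv.DictReader(f)}

    def reconcile(old_path, new_path):
        """Return (item, old_balance, new_balance) for every item the two systems disagree on."""
        old, new = load_balances(old_path), load_balances(new_path)
        mismatches = []
        for item in sorted(set(old) | set(new)):
            if old.get(item, 0.0) != new.get(item, 0.0):
                mismatches.append((item, old.get(item, 0.0), new.get(item, 0.0)))
        return mismatches

    if __name__ == "__main__":
        for item, old_qty, new_qty in reconcile("old_system_balances.csv", "new_system_balances.csv"):
            print(f"{item}: old system = {old_qty}, new system = {new_qty}")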

The alternative is a turnkey, or "cold turkey," method of cutting over. As its name implies, you select a date, and from that day forward all work is done in the new system. Information on previous transactions in the old system can either be converted to the new system or be left to die naturally as the remaining transactions are completed. During this period the user shuttles between the new and old systems, depending on when each transaction was started.

The advantage of the turnkey method is that no additional resources are required for double entry, because there is no double entry. The disadvantage is that you do not have the security of a fully functioning "old" system upon which to rely.

Many companies take out the insurance of a parallel cutover when they do not need it. If, for example, you are implementing an inventory system and you are planning a physical inventory, you might want to do a turnkey cutover to the new system a month before the physical inventory. This gives you a month to work out the bugs in the new system. Any errors in inventory balances created during this shakedown period will be corrected by the application of the physical inventory counts. If you fail to get the system running smoothly, you can load your physical inventory balances into the old system and pull the new system. The point is to ask yourself: what is the worst that can happen if you do not run in parallel and the system fails? Balance that answer against the resources required to implement in parallel, and then make your decision.
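
As a rough sketch of the correction step described above, the illustrative code below generates an adjustment for every item whose physical count differs from the new system's balance at the end of the shakedown period. The data structures and sample values are assumptions for the example; a real package would supply its own count-entry transactions.

    # Physical inventory correction sketch: generate an adjustment for every
    # item whose counted quantity differs from the new system's balance at the
    # end of the shakedown period. The data below is purely illustrative.
    def count_adjustments(system_balances, physical_counts):
        """Return (item, adjustment) pairs that bring the system in line with the count."""
        adjustments = []
        for item in sorted(set(system_balances) | set(physical_counts)):
            counted = physical_counts.get(item, 0.0)   # an uncounted item is assumed to be zero
            on_hand = system_balances.get(item, 0.0)
            if counted != on_hand:
                adjustments.append((item, counted - on_hand))  # positive adds stock, negative removes it
        return adjustments

    if __name__ == "__main__":
        balances = {"A100": 50.0, "B200": 10.0}                 # balances after a month of live running
        counts = {"A100": 48.0, "B200": 10.0, "C300": 5.0}      # physical inventory results
        for item, qty in count_adjustments(balances, counts):
            print(f"Adjust {item} by {qty:+}")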

DATA PROCESSING DESIGN METHODOLOGY.
The question of which data processing design methodology to use is far more complex than the previous questions. There are two basic methodologies that can be pursued, and between them lie a myriad of variations.

The first basic methodology is the traditional life cycle approach to systems development. This approach follows a defined sequence: system requirements are defined; output and input formats are designed; from these formats the system's internal processing is defined; the system is coded and tested; procedures are written; the users are trained; and finally data is converted as the new system is brought on-line.

The major problem with this approach is getting accurate requirements from the users. If your requirements are wrong, you will not know until you bring the system on-line, at which point a substantial investment will have already been made in producing the system.

It is, however, typically difficult for users to specify a new system without a point of reference. The usual point of reference is the old system, which often provides an easy way to incorporate past mistakes into a new system. Additionally, there is a tendency for improved information to generate additional requirements. For this reason, the implementation of a new system is often followed by an endless stream of enhancements.

The second basic methodology is prototyping. Prototyping became possible with the emergence of fourth generation languages. These "sophisticated" languages allowed analysts to build code faster and modify it with greater ease.

In prototyping, a first-cut system is produced from preliminary user requirements. This prototype system is loaded with a subset of actual data, and processing is simulated for the user. Based on this presentation the prototype system is modified, and another simulation is run and presented to the user. This cycle of simulation and modification continues until the user's needs are met. In general, this approach produces systems that are closer to the user's needs than those developed with the traditional life cycle approach. This is because the system's requirements are defined in reference to the new system rather than the old.

Two problems exist with this methodology. First, fourth generation languages typically require a great deal of hardware resources. This requirement has relegated prototyping to the development of small systems. Second, prototyping has lacked development standards. The constant changing of code in the prototype method discourages programmers from documenting their work. This has given prototype systems the reputation of being poorly documented and difficult to maintain. This does not have to be the case if documentation standards are enforced.

So far I have only discussed methodologies as they apply to custom-coded systems. How do they apply to packages? Until recently, packages were fitted into the traditional life cycle methodology: requirements were developed first, and then the package that best satisfied those requirements was selected. Any requirements left unfulfilled by the package were satisfied by custom enhancements, which followed the rest of the cycle. As with custom systems, if your requirements were incorrect you would not find out until you brought the system on-line. Generally, the only benefit gained from using the package was the elimination of a large block of coding.

Recently, prototyping has been applied to package implementation. When prototyping with a package, the package is selected by comparison with high-level requirements. Then the prototype is established using the unmodified package. Data is entered, a simulation is run, and modifications are made, with the cycle continuing until the desired system is reached. When prototyping with a package, the package takes the place of the fourth generation language. Because the core program is already done, and because few packages are coded in fourth generation languages, larger systems can be prototyped. However, strict documentation guidelines are still required to ensure a maintainable system. As mentioned previously, these two basic methods, traditional life cycle and prototyping, can be combined to form hybrid data processing design methods.

The benefits gained in getting closer to the user's needs far outweigh the risks of prototyping. However, if your data processing shop is too rigid to adopt a prototyping methodology, or if you believe that it lacks the discipline required to properly document prototype changes, you may decide to stick to a straightforward life cycle method.

It is essential that your data processing department agrees with the data processing design approach taken and that this agreement is reached before work starts. Your data processing department's cooperation is essential if your implementation is to succeed.

CONSIDERATIONS IN SELECTING THE RIGHT APPROACH.

Now that we have identified the questions you face and their alternative answers, what are the right choices for your implementation?

There are several things to consider with each question. Your decision as to the scope of your initial implementation should be based on the type of software being used, the size of your company, and the resources you have to dedicate to the implementation.

Your choice of cutover method is dependent upon the resources you have and just how critical your data is.

As to which data processing design method will be used to design the system or modifications to it, I have a preference for methodologies that use the prototyping cycle.

Systems implementations are not easy, and pre-planning is essential. Making conscious decisions in advance about how you are going to implement will greatly improve your chances of success. Answering the three questions (what will be the scope of the initial implementation, how will you cut over, and what will be the data processing design method) will enable you to arrive at a workable systems implementation strategy.