
Differences between SquidMat and DECMAT

This section briefly explains how SquidMat differs from DECMAT in (1) conducting the main analysis, (2) determining weights, and (3) performing the sensitivity analysis.


Changes in the Main Analysis Method


DECMAT employs two different types of matrices, each using its own method of analysis. The table below shows each method’s advantages and disadvantages.


Relative-Values Matrix
Method used: Weighted Sums (multiplies each value by its column weight and adds those products across rows)
Advantages of that method:
- Easy to understand
Disadvantages of that method:
- Cannot mix apples & oranges = all values must be in the same metric
- Converting all measures to rankings (relative values) unnecessarily destroys the interval information in all your measures = it turns all your measures to garbage

Multiplication Matrix
Method used: Weighted Products (raises each value to the power of its column weight, then multiplies those exponentiated values across rows)
Advantages of that method:
- Can mix apples & oranges (and that includes rankings or relative values, by the way)
Disadvantages of that method:
- Cannot take zeroes
- Cannot take an odd number of negative values in any one row
- Raises values to the power of the weight, often resulting in extreme values
- Converts more-is-better (MIB) measures to less-is-better (LIB) by inverting them (1/X), producing small values less than 1 that become microscopic after the weighting (exponentiation), which causes other problems
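
To make the contrast concrete, here is a minimal sketch in Python of how one COA row would be scored under each method. The values and weights are made up for illustration; this is not code from either program. Note how quickly the weighted product blows up:

```python
# Minimal sketch (illustrative only; not DECMAT or SquidMat code).
# Scores one hypothetical COA row under each method.

def weighted_sum(values, weights):
    # Multiply each value by its column weight and add the products.
    return sum(v * w for v, w in zip(values, weights))

def weighted_product(values, weights):
    # Raise each value to the power of its column weight, then multiply.
    result = 1.0
    for v, w in zip(values, weights):
        result *= v ** w
    return result

values  = [3.0, 40.0, 0.8]   # hypothetical scores on three ECs
weights = [3.0, 2.0, 1.0]    # hypothetical column weights

print(weighted_sum(values, weights))      # 9.0 + 80.0 + 0.8 = 89.8
print(weighted_product(values, weights))  # 27 * 1600 * 0.8 = 34560.0
```

That 0.8 also hints at the LIB-inversion problem: give it a weight of 5 and it drops to about 0.33, dragging the whole product down.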


Note that there are other, more technical criticisms of each method. But in training environments like ours, I believe we can safely ignore those and use decision matrices effectively to teach basic decision-support methodology. Also note that there are decent fixes for every one of the disadvantages listed above, even the thoroughly flawed mandate to convert all measures to relative values (rankings) whenever you have at least one relative-values measure in your problem. Of course, my solution is to write a program that does it the way I would do it. Hence, SquidMat. SquidMat uses weighted sums, which eliminates my concerns with weighted products.


Here is how we overcome the two disadvantages of weighted sums:


The Disadvantage: Cannot mix apples & oranges = all values must be in the same metric
The Fix: SquidMat converts all values to the same metric (Z scores)

The Disadvantage: Converting all measures to rankings (relative values) unnecessarily destroys the interval information in all your measures
The Fixes:
- Use rankings when you have to; just don't change all measures to rankings
- Convert rankings to scaled values (see below) and use those


Using Z scores. This, I feel, has three advantages.


1. It avoids what is essentially our only remaining mathematical concern with weighted sums (that it requires everything to be in the same metric). This is not to say that others somewhere don’t have concerns about weighted sums. What I’m saying is that any remaining concerns about the mathematics of weighted sums have to do with things that are beyond the scope of what we’re doing (e.g., the criticism that the weighted-sums method is not very sophisticated).

2. It’s a basic tactic we often use in psychometrics (psychological and educational testing) to standardize and compare scores from different tests. But more to the point, it's easy for students to understand and explain.

Anyone who has taken a semester of statistics knows what a Z score is. For each EC column, SquidMat computes a mean (M) and standard deviation (S). Using those values, the program converts each raw score (X) from that column into a Z using this formula:

Z = (X - M)/S.

SquidMat then multiplies each Z by its column weight and adds them across each COA row. (A short sketch of this whole pipeline appears after this list.)

3. It makes it very straightforward to convert an EC's values when it's LIB. Since Z scores are centered on zero, all you have to do to convert LIB values for an MIB analysis is change that column's Z scores' signs. That is, you simply multiply the Zs by -1. The inversion employed by DECMAT (1/X) gets very messy, resulting in values of 10-to-the-huge-power. (You know what I’m talking about. You’ve seen that note at the bottom right-hand corner of DECMAT.)
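
Here is a minimal sketch of that whole pipeline, with hypothetical data. (One assumption: I haven't said whether SquidMat uses the sample or the population standard deviation; the sketch uses the sample version.)

```python
import statistics

def z_scores(column, lib=False):
    """Convert one EC column of raw scores to Z scores.

    If the EC is LIB, flip the signs so the column reads as MIB."""
    m = statistics.mean(column)             # M
    s = statistics.stdev(column)            # S (sample standard deviation)
    zs = [(x - m) / s for x in column]      # Z = (X - M)/S
    return [-z for z in zs] if lib else zs

# Hypothetical matrix: three COAs scored on two ECs.
cost   = z_scores([100.0, 150.0, 125.0], lib=True)  # cost is LIB: signs flipped
range_ = z_scores([300.0, 450.0, 400.0])            # range is MIB: left alone
weights = [2.0, 1.0]                                # hypothetical column weights

# Multiply each Z by its column weight and add across each COA row.
totals = [weights[0] * c + weights[1] * r for c, r in zip(cost, range_)]
print(totals)  # the COA with the highest total wins
```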


Using rankings. SquidMat allows you to mix rankings (relative values) with real values. Real values have two critical properties: order and interval. The order property lets you say that one numerical value is greater than another (as opposed to simply labeling things, like 1=Democrat, 2=Republican, 3=Trekkie, etc.). The interval property captures the size of the gap between one value and the next.


For example, if A crosses the finish line 10 seconds ahead of B, but B crosses the finish line 1 second ahead of C, ranking them as A = 1, B = 2, and C = 3 preserves the order information but not the interval information. Reducing the data to ranks forces equal intervals between the values; there is no way to know that the gap between A and B was huge while that between B and C was relatively small.


As we know from doing DECMAT problems, there are some situations where we don't know the interval information, so we're forced to use rankings (relative values). My thinking, though, is that it's bad enough when we have to resort to rankings. The cure is not to do a Relative-Values matrix where all values are robbed of their interval information. The better solution is to use rankings when you have to and limit the damage by letting all the ECs whose values are real retain their superior interval properties. Don't make it worse by turning them all into rankings, for crying out loud.


An even better solution is to use your judgment to rate the rankings on a larger scale. For example, say I don't know three people's weights, but I know that A is heavier than B and B is heavier than C. Instead of just ranking them from 1 to 3, I could rate them on a scale of 1 to 10. I could make A a 10 and C a 1 and try to place B between 1 and 10. If B is very, very close to C while being far away from A, I'd rate B as a 2 or 3. This technique restores some of the lost interval information.


Note that the actual scale values don't really matter. You can use 1 to 1000, 3 to 17 ½, -449 to 1 billion, etc., if you like. The whole thing will get rescaled when the values are changed to Z values.
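
A quick way to convince yourself of that (a small demonstration, not SquidMat code): Z scores are unchanged by any positive linear rescaling, so two scales that preserve the same relative spacing produce identical Z scores.

```python
import statistics

def z_scores(column):
    m, s = statistics.mean(column), statistics.stdev(column)
    return [(x - m) / s for x in column]

ratings  = [10.0, 2.0, 1.0]                      # A, B, C on a 1-to-10 scale
rescaled = [111.0 * x - 110.0 for x in ratings]  # the same spacing on 1 to 1000

print(z_scores(ratings))   # identical output (up to floating-point rounding),
print(z_scores(rescaled))  # because Z scores ignore linear changes of scale
```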


Speaking of which…for those of you who know about a third property of good data values—that the data have a real zero, as opposed to an arbitrary one like the zero in temperature—it doesn't matter. The conversion to Z scores produces real zeroes. So that one’s automatically covered.



Changes in the Weighting


The next major change in the way SquidMat does business is to get rid of the pairwise comparison method for determining weights. I understand why DECMAT uses this approach, but I don't think it's really necessary. In fact, I believe it does more harm than good, particularly in a basic learning environment.


Briefly, without getting too technical, the pairwise comparison is part of a general decision-support methodology known as (the) Analytic Hierarchy Process (AHP, Dr. Thomas Saaty, formerly of the Wharton School of Business). Among other things, AHP is useful when you have potential relationships like this: Oklahoma beats Texas, Texas beats Kansas State, Kansas State beats Oklahoma. Who's the better team? AHP uses pairwise value judgments (and the accompanying eigenanalysis) to divine the best team. DECMAT uses AHP to determine weights.


Reservations about using AHP in this way. While I appreciate, even admire, the idea of using AHP to lend more objectivity to the problem of subjectively determining weights, I have three major reservations.


1. As I mentioned above, I believe that if it's important enough to grill the students on how the weights are determined, then they should actually be taught how it's done. It does no one any good to give them canned replies ("the weights are eigenvalues derived from the pairwise comparison matrix") that neither instructor nor student understands.


2. There is no reason why we shouldn't have perfect consistency for our weight priorities. It seems to me that when I say that one EC is more important than another, then that's that. It is more important. And if EC 1 is twice as important as EC 2, and EC 2 is equal to EC 3, then EC 1 will be twice as important as EC 3 as well. I don't see why that shouldn't be true for what we're doing. Using AHP forces us to be less certain about this.


3. Pairwise comparison makes perfect consistency virtually impossible, even when we're absolutely certain about the order. In DECMAT's pairwise comparison, we get 4 values to choose from when rating how much more important one EC is than another. (Saaty recommends a range of 1 to 9; DECMAT uses 1 to 4. Both ranges are too small to avoid inconsistency in most situations, even when we want perfect consistency.) Theoretically, you'd need a very large range to achieve perfect consistency. (I know that probably doesn't make much sense to you as is. I'd be happy to explain it in more detail for those who have the stomach for it.)


The range of importance values. A related concern that I have with the AHP scheme is that a rating of "this EC is slightly more important than the next EC" uses a value of 2. (Both Saaty and DECMAT use a value of 2.) If the scheme were perfectly consistent, you'd be saying that an EC that is slightly more important than the next is actually twice as important. In other words, under perfect consistency, AHP makes the weight of a slightly more important EC twice that of the other one. I personally don't feel comfortable with that assessment. "Twice as important" doesn't mean "slightly more" in my way of thinking.


The cure. Here's what I did to fix all that. The pairwise comparison, its eigenanalysis, and the consistency ratio are all gone. The user shuffles the ECs into the order he or she wants and then tells the program how much more important an EC is than the next EC. So each EC except the last one gets a single rating, i.e., how much more important it is than the next EC. The last EC automatically gets a weight of 1. There is no need to assess consistency, since I dogmatically enforce perfect consistency (i.e., all consistency ratios would be 1.00).


Here are the possible Importance values:

    1.00: same as next EC

    1.25: slightly more important than the next EC

    1.50: somewhat more important than the next EC

    1.75: much more important than the next EC

    2.00: considerably more important than the next EC


Note that these values are based entirely on my opinion that an EC should be no more than twice as important as the one it's standing next to. I say this because I've seen how easy it is for students to get carried away with them. For example, if I have four ECs, and I think that each one is considerably more important than the next, I get weights of 8, 4, 2, and 1. I think that's a decent spread. If I allow the max to be 4, I get weights of 64, 16, 4, and 1. I think that's a bit much.
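
Here is a minimal sketch of how such a weight chain can be computed (hypothetical code; not SquidMat's internals). Each Importance value is the ratio of an EC's weight to the next EC's weight, and the chain is anchored at 1:

```python
def weights_from_importance(importance):
    """Turn Importance ratios into EC weights.

    importance[i] says how much more important EC i is than EC i+1;
    the last EC is anchored at a weight of 1."""
    weights = [1.0]
    for ratio in reversed(importance):
        weights.insert(0, ratio * weights[0])  # weight = ratio * next EC's weight
    return weights

# Four ECs, each "considerably more important" (2.00) than the next:
print(weights_from_importance([2.0, 2.0, 2.0]))  # [8.0, 4.0, 2.0, 1.0]
```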


Room for other opinions. Because weight determination is an essentially subjective undertaking, I don’t want to be entirely dogmatic about it. I've built in an override so that the user can bypass my scale and enter his or her own Importance values. For programming convenience, I limit the override value to 9.99. That is, the most any EC can be over the next EC is 9.99.



Changes in the Sensitivity Analysis


I don't use the "go up and down 3 and see if the answer changes" method employed by DECMAT, but the idea is the same. For each EC, I start the weight at 1, find every weight value at which the answer would change, and report what each of those weights is and which COA wins at that weight. This effectively gives the user every possibility across the entire range from 1 to infinity.


(I don’t have the computer actually search through a series of possible weight values and test them, as DECMAT does. I derived a formula that finds all the weight values at which two or more courses of action tie. I add 0.0001 to the tying weight and report it.)
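
I won't reproduce the exact formula here, but the algebra behind the idea is straightforward to sketch: with every other weight held fixed, each COA's total is linear in the one weight being varied, so the tie point between two COAs falls out directly. A hypothetical sketch:

```python
def tie_weight(base_a, z_a, base_b, z_b):
    """Weight at which COAs A and B tie when one EC's weight is varied.

    With the other weights held fixed, each total is linear in w:
        total = base + w * z
    where base is the weighted sum over all the OTHER ECs and z is the
    COA's Z score on the EC whose weight is varied. Setting the two
    totals equal and solving gives the tie point. Returns None if the
    lines are parallel (the COAs never tie)."""
    if z_a == z_b:
        return None
    return (base_b - base_a) / (z_a - z_b)

# Hypothetical numbers: A trails B at low weights but gains faster.
w = tie_weight(base_a=2.0, z_a=1.5, base_b=4.0, z_b=0.5)
print(w)  # 2.0 -> report 2.0001 as the first weight at which A takes the lead
```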

