FAQ - Datapaths
Q1: How do logical units work?
Q2: Is there more than one correct microcode solution?
Q3: How do I unpack fields in microcode?
Q4: How are data shift amounts specified?
Q5: What's the difference between a logical and arithmetic shift?
Q6: How do I determine the length of an instruction?
Q7: How do I unpack fields from a value?
Q8: When are immediate values sign extended?
Q1:
When it comes to using the logical unit, I don't understand how to
manipulate the values of X and Y, depending on the logical function,
to determine what value gets passed on to the Z bus. The only two
functions I understand are: if the function is X, the X value gets
passed on, and if the function is Y, the Y value gets passed on. Can
you please explain what effect the other important functions
have on the X and Y values?
The logical unit performs word-wide bit-wise logical operations on
values from the X and Y busses. For example, if one performs the AND
operation, Xi and Yi are ANDed to produce Zi for all bit positions i
from 0 to 31. The logical function is specified as a 4-bit value (LF).
Each bit of LF determines the function output for a different
combination of X and Y values. The table printed on the datapath
sheets shows how this is defined.
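In other words, LF is a four-entry truth table indexed by the pair (Xi, Yi). The sketch below illustrates the idea; the index convention (index = 2*Xi + Yi) is an assumption for illustration, since the actual table is the one printed on the datapath sheets.

```python
def logical_unit(x, y, lf, width=32):
    """Bit-wise logical unit: the 4-bit LF value is a truth table.

    For each bit position i, the pair (x_i, y_i) selects one of the
    four LF bits as the output z_i.  The index convention here
    (index = 2*x_i + y_i) is an assumption for illustration; consult
    the datapath sheets for the real encoding.
    """
    z = 0
    for i in range(width):
        xi = (x >> i) & 1
        yi = (y >> i) & 1
        zi = (lf >> (2 * xi + yi)) & 1
        z |= zi << i
    return z

# With this convention, LF = 0b1000 computes AND, 0b1110 computes OR,
# 0b0110 computes XOR, and 0b1100 passes X through unchanged.
```

Whatever the exact encoding, the point is the same: the four LF bits enumerate the function's output for the four possible (Xi, Yi) combinations, so all sixteen two-input logical functions are available.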
Q2:
When I do the microcode problems, I can hardly get exactly the same
answers as you provided (the order of the steps, and the utilization of
different registers); however, my answers do make sense to me. I think
there should be different ways of doing this kind of problem. If this
is the case, how can I make sure the answers I got are correct?
Like all programming problems, there is often more than one correct
way to accomplish a task. While most of the old exam problems and
study problems constrain the possible solutions, one can still find
some small variations on instruction order, register usage, etc. The
only way to test correctness is to carefully examine both the provided
solution and proposed answer and look for differences that could
violate the objectives stated in the problem.
Q3:
Could you please explain the purpose of masking as was done in
exam three for Spring 2001, the test that employed Microcode Reverse
Engineering? Why is it necessary to mask all but the least
significant 10 bits? I was able to write all of the operations, but I
am still quite unsure as to what the microcode is doing. Could you
please briefly describe what the microcode is doing and its purpose?
Often values used in a computation are smaller than the word size of a
processor. To save memory, several of these values are "packed" into a
single word. When this is done, the values must be "unpacked" (e.g.,
separated and positioned properly within a register) before they can
be operated on by datapath units. The shift operation moves a given
packed value so that the least significant bit of the value is also
the least significant bit of the register (i.e., word). The mask
operation (bit-wise AND) then clears the bits of any other values that
still exist in the register. The operation employs a mask word that
contains ones where bits should be kept and zeros where bits should be
cleared (masked). For this example, the bit mask looks like this:
00000000000000000000001111111111
This mask will clear all bits except for the least significant ten
bits which will be unchanged.
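The shift-and-mask sequence can be sketched as follows; the word value and field positions are made up for illustration.

```python
def unpack(word, pos, width):
    """Shift the field down so its least significant bit lands in
    bit 0, then AND with a mask of `width` ones to clear the bits
    of any other packed values."""
    mask = (1 << width) - 1          # width=10 -> 0b1111111111 = 0x3FF
    return (word >> pos) & mask

WORD = 0xDEADBEEF                    # hypothetical packed word
low10 = unpack(WORD, 0, 10)          # least significant 10-bit field
next10 = unpack(WORD, 10, 10)        # the 10-bit field above it
```

The same two-step pattern (shift, then mask) works for any field width and position; only the shift amount and the mask word change.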
Q4:
I have a question on the shifting instructions. What does it mean to
have a hexadecimal shift amount of 0xFFFC? What does the 0 stand for?
The prefix "0x" means the value that follows is in hexadecimal
notation. When shifting, the shift amount is sometimes specified in
hexadecimal. The least significant six bits form a two's complement
number (-31 to +31). The sign specifies the shift direction: + means
to the right, - means to the left. Note that all other bits in an
immediate word are ignored.
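Decoding 0xFFFC can be sketched like this (a minimal illustration of the rule above, not the actual datapath logic):

```python
def shift_amount(imm):
    """Interpret the low six bits of an immediate as a signed
    two's-complement shift amount; all other bits are ignored."""
    s = imm & 0x3F           # keep only the least significant six bits
    if s & 0x20:             # bit 5 set -> negative value
        s -= 64
    return s

# 0xFFFC -> low six bits are 0b111100 = 60, i.e. 60 - 64 = -4,
# which specifies a shift of four positions to the left.
```

So 0xFFFC is simply a compact way to write "shift left by 4": the upper bits are all ones only because the small negative value was written as a full-width two's complement word.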
Q5:
So I'm confused about why someone would use the shift right arithmetic
versus that shift right logical to divide by 2^x. For instance in the
review packet on problem IN-2 part B why did they use sra?
For right shifts, logical and arithmetic shifts differ in that logical
shifts just move bits around, inserting zeros at the ends. Arithmetic
shifts assume the bits represent a two's complement value that might be
negative. Hence the most significant bit is copied to preserve the
sign of the value. Left shifts of both types always shift in zeros.
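The difference matters exactly when the value might be negative. This sketch (assuming a 32-bit word) shows why sra, not srl, divides a signed value by 2:

```python
def srl(x, n, width=32):
    """Logical right shift: zeros are inserted at the top."""
    return (x & ((1 << width) - 1)) >> n

def sra(x, n, width=32):
    """Arithmetic right shift: the sign bit is copied down."""
    x &= (1 << width) - 1
    if x & (1 << (width - 1)):        # negative in two's complement
        x |= ~((1 << width) - 1)      # extend the sign above bit 31
    return (x >> n) & ((1 << width) - 1)

# -8 is 0xFFFFFFF8 in 32-bit two's complement.
# srl gives 0x7FFFFFFC: the sign is lost, the result is a huge
# positive number.  sra gives 0xFFFFFFFC, which is -4, i.e. -8/2.
```

For a non-negative value the two shifts produce identical results, which is why problems that guarantee positive inputs can use either.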
Q6:
In Fall 2000 final exam problem 1, how do you calculate the
number of opcodes? Also, how do you determine the number of bits
needed to specify a branch if equal instruction in the format?
The opcode field has 12 bits, so there are 2^12 = 4096 possible
opcodes. BEQ is an I-format instruction, so its length is the sum of
its fields: 12 + 9 + 9 + 18 = 48 bits.
Q7:
In Fall 1999 final exam microcode problem, we have to add two 16 bit
values. I first logically shifted 16 bits to the right, followed by a
16 bit rotate shift and then another 16 bit shift to the right. Seems
to work too? Is that a correct way to get the 2 values? Also, I do
not understand the solution given. First the value is logically
shifted 16 bits and stored in R3. Then R2 is ANDed with
FFFF. Shouldn't that be 00FF, or is there something that I am missing?
There are many ways to unpack values. The most common is the shift and
mask technique. However, one can also employ this rotate and shift
approach. When specified correctly, both techniques are equally
correct and efficient. Sixteen ones is four hexadecimal characters
(four Fs).
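Both techniques can be sketched side by side; the packed word below is hypothetical, and the rotate-based variant shown is one of several equally valid orderings, not necessarily the exact sequence from the exam solution.

```python
def rotate_left(x, n, width=32):
    """Rotate a width-bit word left by n positions."""
    mask = (1 << width) - 1
    x &= mask
    return ((x << n) | (x >> (width - n))) & mask

packed = 0x1234ABCD          # two hypothetical 16-bit values in one word

# Shift-and-mask: a logical right shift by 16 isolates the high half;
# ANDing with 0xFFFF (sixteen ones = four hex Fs) isolates the low half.
high = (packed >> 16) & 0xFFFF
low = packed & 0xFFFF

# Rotate-and-shift: rotating by 16 swaps the halves, after which a
# logical right shift by 16 isolates what was the low half.
high2 = packed >> 16
low2 = rotate_left(packed, 16) >> 16
```

Either way, each extracted half ends up right-justified in its register with the other sixteen bits cleared, ready for the adder.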
Q8:
In Fall 1999 final exam problem 5, for step 4, we are trying to AND R2
with FFFF to get the lower 16 bits, but if we use FFFF as the
immediate value, after it's been sign extended, it will become
FFFFFFFF, won't it?
The sign extend unit only sign extends for arithmetic operations (not
logical operations).
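A minimal sketch of that rule, assuming a 16-bit immediate field in a 32-bit datapath (the function name is made up for illustration):

```python
def extend_immediate(imm16, arithmetic):
    """Sketch of the immediate path: sign-extend only when the
    operation is arithmetic; zero-extend for logical operations."""
    if arithmetic and (imm16 & 0x8000):   # arithmetic op, sign bit set
        return imm16 | 0xFFFF0000         # sign extension to 32 bits
    return imm16                          # logical op: upper bits stay 0

# For AND, the immediate FFFF reaches the datapath as 0x0000FFFF,
# so R2 AND imm keeps exactly the lower 16 bits of R2.
# For ADD, the same bits would become 0xFFFFFFFF, i.e. -1.
```

So the mask works as intended: because AND is a logical operation, the sign extend unit leaves the upper sixteen bits zero.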