Clarification for hw3

(These are suggestions for reference; nothing is required, though everything here is recommended.)

More Hints for Modeling

(This is a more simplified version than would be desirable, but it is sufficient for hw3. Those who have already implemented something more elaborate do not have to follow these hints.)

Prestep:
1. Have the AIP output layer (Y1) and the RX output layer (Y2) as input to the WTA hidden layer (WH). (Fully connect Y1-WH and Y2-WH separately.)
2. Fully connect WH to G (the WTA output).
3. Compare G to its target output to generate the backprop error for G-WH-(Y1,Y2). (You may implement the WTA non-neurally as well, but then you have to figure out for yourself how to make the backprop work; please look at C.4 below. A sketch of this branch is given after the coarse-coding example below.)

Step A:
1. Have one input layer (P1), one hidden layer (P2), and separate output layers for IT (Z) and AIP (Y1).
2. Have P1 to P2 fully connected, and likewise P2 to Z and P2 to Y1.
3. Compare Z to the IT target output to generate the backprop error for Z-P2-P1.
4. Compare Y1 to its target output to generate the backprop error for Y1-P2-P1.
5. Compare the weights for P1 to P2, those for P2 to Z, and those for P2 to Y1, and answer Q(a). (G may come out right, but it has nothing to do with the question at this point.)
(A minimal sketch of this step's wiring is also given after the coarse-coding example below.)

Step B:
1. Restore the Y2-to-WH connection.
2. Have Z as input to RX. (Link Z to the RX hidden layer (RXH) fully.)
3. Have RXH and Y2 (the RX output) fully connected.
4. Compare Y2 to the trained Y1 to generate the backprop error for Y2-RXH-Z. (No further backprop from Z to P2 at this point; the Y2-WH-G weights will change and must converge.)
5. Set the Y1-to-WH connections all to 0. (Lesion it.)
6. Run the simulation and answer Q(b).

Step C:
1. Restore the Y1-WH connection.
2. Compare G to its target output to generate backprop errors, and let them back-propagate through G-WH-(Y1,Y2).
3. Run the simulation and answer Q(c). (Compare the weights for Y1-WH and those for Y2-WH.)
4. Here you may have to go for full backprop from G to X if your WTA is not a backprop module. Do it after the weights are set from A and B to make your training shorter. Have the errors of G propagate all the way back to X. No independent errors (as mentioned above) will be considered. After training, lesion one branch at a time to test whether the simulation still works, and explain the outcome.

Step D:
1. Do C with a smaller number of combinations (less diverse patterns).
2. Answer Q(d). (Make the combinations such that only one branch is enough to answer correctly, then see what happens to the weights of the other branch.)

Example of Coarse Coding

X (visual input) : {
//shape:
  cylinder                    // generic shape
  one-end-sharpened cylinder  // for pencil, lipstick
  cylinder with axial hole    // for spool
  cylinder with opened top    // for beer glass
//color:
  white
  yellow
  black
  red
//width:
  small
  medium
  big
//length:
  short
  medium
  tall
}

Z (ID of objects) : {
  lipstick
  pencil
  spool of thread
  beer glass
  cylinder
}

Y1, Y2, G (grasp preference, aperture) : {
//grasp preference:
  precision pinch  // as for pencil
  side grasp       // as for beer glass and some cylinders
  power grasp      // some cylinders and the spool, depending on size; grabbable into one hand
//aperture:
  narrow
  middle
  wide
}
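For reference only, here is a minimal numpy sketch of the Step A wiring (P1 -> P2 -> {Z, Y1}). It assumes sigmoid units and takes the layer sizes from the coarse coding above (14 X units, 5 Z units, 6 Y units); the hidden size of 10, the learning rate, and the function name step_a_update are arbitrary choices for illustration, not part of the assignment. The point to notice is how the hidden delta at P2 sums the errors arriving from both the Z head and the Y1 head (Steps A.3-A.4).

# Minimal Step A sketch (assumptions: sigmoid units, hidden size 10, lr 0.5).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Weights: P1 (14) -> P2 (10), P2 -> Z (5), P2 -> Y1 (6)
W_p1p2 = rng.normal(scale=0.1, size=(14, 10))
W_p2z  = rng.normal(scale=0.1, size=(10, 5))
W_p2y1 = rng.normal(scale=0.1, size=(10, 6))

def step_a_update(x, z_target, y1_target, lr=0.5):
    """One joint backprop step for Z-P2-P1 and Y1-P2-P1 (Steps A.3-A.4)."""
    global W_p1p2, W_p2z, W_p2y1
    p2 = sigmoid(x @ W_p1p2)             # hidden layer P2
    z  = sigmoid(p2 @ W_p2z)             # IT output head
    y1 = sigmoid(p2 @ W_p2y1)            # AIP output head
    dz  = (z_target - z)   * z  * (1 - z)     # output delta for Z
    dy1 = (y1_target - y1) * y1 * (1 - y1)    # output delta for Y1
    # Hidden delta at P2 sums the errors coming back from both heads
    dp2 = (dz @ W_p2z.T + dy1 @ W_p2y1.T) * p2 * (1 - p2)
    W_p2z  += lr * np.outer(p2, dz)
    W_p2y1 += lr * np.outer(p2, dy1)
    W_p1p2 += lr * np.outer(x, dp2)
    return z, y1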
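In the same spirit, a sketch of the Prestep's G branch, continuing the code above: WH receives both Y1 and Y2, G is read out of WH, and the error at G back-propagates into both the Y1-WH and Y2-WH weight blocks. Zeroing (and freezing) one block is one way to implement the lesions in B.5 and C. The WH size of 8 and the name g_update are again arbitrary; a plain sigmoid layer stands in for the WTA here, so a true winner-take-all would need the separate treatment mentioned in C.4.

# G branch sketch (Prestep / Step C); WH size 8 is an arbitrary choice.
W_y1wh = rng.normal(scale=0.1, size=(6, 8))
W_y2wh = rng.normal(scale=0.1, size=(6, 8))
W_whg  = rng.normal(scale=0.1, size=(8, 6))

def g_update(y1, y2, g_target, lr=0.5, lesion_y1=False):
    """One backprop step for G-WH-(Y1,Y2); lesion_y1 zeroes and freezes Y1-WH (Step B.5)."""
    global W_y1wh, W_y2wh, W_whg
    if lesion_y1:
        W_y1wh[:] = 0.0                  # lesioned branch contributes nothing
    wh = sigmoid(y1 @ W_y1wh + y2 @ W_y2wh)
    g  = sigmoid(wh @ W_whg)
    dg  = (g_target - g) * g * (1 - g)
    dwh = (dg @ W_whg.T) * wh * (1 - wh)
    W_whg  += lr * np.outer(wh, dg)
    W_y2wh += lr * np.outer(y2, dwh)
    if not lesion_y1:
        W_y1wh += lr * np.outer(y1, dwh)
    return g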
Example matches for X, Z, Y, and G

long pencil --> thumb-index pinch with narrow opening
X { .1 1 .1 .1 0 0 1 0 1 .1 0 0 .1 1 }  // long pencil
Z { .3 1 0 0 .2 }  // most likely pencil, possibly lipstick or cylinder
Y { 1 .1 .1 1 .2 0 }
G { 1 0 0 1 .2 0 }

spool --> power grasp with five closing fingers and the palm of one hand
X { .1 .2 1 .1 1 0 0 0 .2 1 .3 .3 1 .2 }
Z { .1 .1 1 .2 .4 }
Y { .2 .3 1 .3 1 .2 }
G { 0 0 1 .3 1 0 }

tall cylinder of middle thickness --> side grasp
X { 1 .1 .1 .1 1 0 0 0 .2 1 .3 .1 .3 1 }
Z { .1 .1 .1 .3 1 }
Y { .1 1 .3 .2 1 .3 }
G { 0 1 0 0 1 .3 }

Hint for Training

Take a look at the Backprop model and see how input and output are connected through the model. Lengthen the input and output vectors according to what you need for this homework.
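As a usage illustration, and again only as a sketch reusing step_a_update and g_update from above, the three example matches can be turned directly into training pairs. The vectors are copied verbatim from the examples; the epoch count is arbitrary, and feeding the Y target in place of Y2 (which would really come from RX after Step B) is a labeled simplification.

# Train Step A on the three example patterns, then feed the trained Y1
# into the G branch. Vectors are the example matches given above.
patterns = [
    # (X, Z, Y, G)
    (np.array([.1, 1, .1, .1, 0, 0, 1, 0, 1, .1, 0, 0, .1, 1]),   # long pencil
     np.array([.3, 1, 0, 0, .2]),
     np.array([1, .1, .1, 1, .2, 0]),
     np.array([1, 0, 0, 1, .2, 0])),
    (np.array([.1, .2, 1, .1, 1, 0, 0, 0, .2, 1, .3, .3, 1, .2]),  # spool
     np.array([.1, .1, 1, .2, .4]),
     np.array([.2, .3, 1, .3, 1, .2]),
     np.array([0, 0, 1, .3, 1, 0])),
    (np.array([1, .1, .1, .1, 1, 0, 0, 0, .2, 1, .3, .1, .3, 1]),  # tall cylinder
     np.array([.1, .1, .1, .3, 1]),
     np.array([.1, 1, .3, .2, 1, .3]),
     np.array([0, 1, 0, 0, 1, .3])),
]

for epoch in range(5000):                  # arbitrary epoch count
    for x, z_t, y_t, g_t in patterns:
        _, y1_out = step_a_update(x, z_t, y_t)
        # Simplification: the Y target stands in for Y2 (the RX output).
        g_update(y1_out, y_t, g_t)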