CS 564 Homework: Self-Organization of the FARS Visual System

Due November 26


There is an optional reference PowerPoint file that you may want to look at here.
To encourage you to use the new web page, you may refer to the backpropagation model presented there.
You are required to use the new NSL version for this assignment. Please start early in case you run into compilation errors.


Include graphs, tables, etc., printed from NSL or organized manually, both to illustrate your answers and to support their correctness.

Assume the following wiring for a simple visual system.

X: is the visual input [shape, color, width, height]

Z: is the identity of the object

Y1: is AIP's grasp preference [side, power, precision, with aperture coding]

Y2: is RX's grasp preference [side, power, precision, with aperture coding]

G: is the final grasp command [side, power, precision, with aperture coding]

WTA: is a winner-take-all network (it can also be implemented non-neurally)

RX: is an unspecified region to be implemented as a backprop network similar to IT and AIP.
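Since the WTA stage may be implemented non-neurally, a minimal sketch can clarify what it must do: pool the two grasp preferences and let a single winner through. The function name, vector sizes, and pooling-by-sum rule below are illustrative assumptions, not part of the NSL model.

```python
import numpy as np

def wta(y1, y2):
    """Non-neural winner-take-all: combine AIP's preference (y1) and
    RX's preference (y2), keep only the strongest unit as the grasp
    command G. Summation as the pooling rule is an assumption here."""
    combined = np.asarray(y1, dtype=float) + np.asarray(y2, dtype=float)
    g = np.zeros_like(combined)
    g[np.argmax(combined)] = combined.max()  # single winner survives
    return g

# Example: AIP strongly prefers grasp type 1, RX weakly agrees
print(wta([0.2, 0.7, 0.1], [0.1, 0.2, 0.0]))
```

Setting one input to all zeros (as in part (a), where Y2 is not in place, or after the lesion in part (b)) makes the WTA simply select AIP's or RX's strongest grasp.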

[Figure: FARS visual model for the homework]

The network will receive a hand-coded coarse coding for the following objects: cylinders of various lengths, widths, and colors; a red lipstick; pencils sharpened at one end (thus of a single width but of various lengths and colors); a spool of thread; and a yellow one-pint beer glass (no handle). Make sure that you have many training patterns (at least 25 combinations).

The network is to be trained so that the output z encodes "what" the object is, while the output y1 encodes "how" to grasp the object (so we want the output to encode two affordances: type of grasp and the aperture required). You must therefore also design output encodings for IT and AIP. IT, AIP, and RX are each one-hidden-layer backpropagation neural networks.
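One way to generate enough training combinations is to enumerate feature values per object. The feature domains and specific values below are assumptions for illustration; you must design your own coarse coding for X, z, and y1.

```python
import itertools

# Hypothetical feature domains for the X input [shape, color, width, height]
SHAPES  = ["cylinder", "lipstick", "pencil", "spool", "beer_glass"]
COLORS  = ["red", "yellow", "blue", "green"]
WIDTHS  = ["thin", "medium", "wide"]
HEIGHTS = ["short", "medium", "tall"]

def one_hot(value, domain):
    return [1.0 if v == value else 0.0 for v in domain]

def encode(shape, color, width, height):
    # X vector: concatenated one-hot codes (a crude coarse coding)
    return (one_hot(shape, SHAPES) + one_hot(color, COLORS)
            + one_hot(width, WIDTHS) + one_hot(height, HEIGHTS))

# Cylinders vary freely in color, width, and height; the lipstick is
# fixed red (other objects would be added the same way)
patterns = [encode("cylinder", c, w, h)
            for c, w, h in itertools.product(COLORS, WIDTHS, HEIGHTS)]
patterns.append(encode("lipstick", "red", "thin", "short"))
print(len(patterns))  # already 37 combinations, above the 25-pattern minimum
```

A graded coarse coding (overlapping Gaussian bumps per feature) would be closer to the spirit of the assignment than strict one-hot codes; the one-hot version is just the simplest to show.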

a) [2 pts] Train AIP and IT with no learning in RX and with Y2 not in place (take the Y2 input to WTA as zero). Do you see any segregation between units with stronger connections to AIP than to IT, or vice versa? If so, can you describe the features encoded by some of these units?

b) [3 pts] With the network trained in (a), add the RX-->WTA connections and then train RX by backpropagation, using the output of IT (z) as the input and the output of AIP (y1) as the target output.
After training, lesion AIP (i.e., remove the AIP-->WTA connection) and report on how well the whole system responds to the inputs.
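The training scheme in (b) can be sketched as a tiny one-hidden-layer backpropagation network mapping IT's output to AIP's output. The layer sizes, learning rate, and the stand-in data (here generated from a fixed random mapping rather than from the actual IT and AIP networks trained in (a)) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 6, 4        # assumed sizes of z, RX hidden, y1

# Stand-ins: z plays IT's output, y1 plays AIP's output. In the
# assignment these come from the networks trained in part (a); here
# y1 is a fixed smooth function of z so RX has something learnable.
z = rng.random((30, n_in))
A = rng.normal(0.0, 1.0, (n_in, n_out))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
y1 = sigmoid(z @ A)

W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

def forward(x):
    return sigmoid(sigmoid(x @ W1) @ W2)

mse_before = np.mean((forward(z) - y1) ** 2)
lr = 0.5
for epoch in range(2000):
    h = sigmoid(z @ W1)             # RX hidden layer
    y2 = sigmoid(h @ W2)            # RX's grasp preference
    d2 = (y2 - y1) * y2 * (1 - y2)  # backprop through output sigmoid
    d1 = (d2 @ W2.T) * h * (1 - h)  # backprop through hidden sigmoid
    W2 -= lr * h.T @ d2 / len(z)
    W1 -= lr * z.T @ d1 / len(z)
mse_after = np.mean((forward(z) - y1) ** 2)
print(mse_before, mse_after)
```

After this training, "lesioning AIP" amounts to feeding only RX's y2 into the WTA: if RX has learned the z-to-y1 mapping well, the grasp command G should change little.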

c) [3 pts] Combine (a) and (b): train all three regions at the same time and report what happens.

d) [2 pts] Repeat (c) with a much smaller number of training combinations. Does the model fail to train? Explain why.

Get the Backprop model code here!