The network consists of three input layers and an intermediate layer.
The three input layers – an eye-centered layer, an eye-position layer
and a head-centered layer – are also output layers; the final
estimates of the network are read from these layers after relaxation.
Each input layer is a topographic array of N units indexed by their position.
The input layers are symmetrically interconnected with the intermediate
(hidden) layer, and the corresponding matrices of connection weights are
denoted by W_r, W_e and W_a for the eye-centered, eye-position and
head-centered layers, respectively.
The variables R_{ri}(t), R_{ej}(t) and R_{ak}(t) denote the activity of
unit i, j and k in the eye-centered, eye-position and head-centered
layer, respectively, at time t.
For the eye-centered layer, the initial activity R_r(0) is drawn from a
probability distribution P(R_r(0)|x_r), where x_r is the encoded
eye-centered position and f_i(x_r) is the mean activity (tuning curve) of
unit i. The expressions for P(R_e(0)|x_e) and P(R_a(0)|x_a) are identical
to P(R_r(0)|x_r), except that the subscript r is replaced by e or a.
Likewise, the expressions for f_j(x_e) and f_k(x_a) are identical to
f_i(x_r), except that r is replaced by e or a.
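The excerpt does not give the explicit forms of f_i or of the initial noise distribution, so the following is only an illustrative sketch: it assumes a circular-normal tuning curve built from the parameter values quoted at the end of the section (K = 20 Hz peak, ν = 1 Hz baseline, σ = 0.40 radians width) and independent Poisson variability, a common choice for this class of population-code model. The function names `tuning` and `initial_activity` are hypothetical.

```python
import numpy as np

def tuning(x, preferred, K=20.0, nu=1.0, sigma=0.40):
    # Assumed circular-normal tuning curve f_i(x): peak rate K (Hz),
    # baseline nu (Hz), width sigma (radians); x_i = `preferred`.
    return K * np.exp((np.cos(x - preferred) - 1.0) / sigma**2) + nu

def initial_activity(x, N=20, rng=None):
    # Sample R(0) for one input layer: independent Poisson counts around
    # the tuning-curve means (the noise model is an assumption here).
    rng = np.random.default_rng(0) if rng is None else rng
    preferred = np.linspace(-np.pi, np.pi, N, endpoint=False)
    return rng.poisson(tuning(x, preferred))

R_r0 = initial_activity(x=0.3)  # noisy hill of activity, eye-centered layer
```

The same sampler would serve for the eye-position and head-centered layers, with r replaced by e or a as described above.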
The activity in the intermediate layer is initialized to zero:
A_{lm}(0) = 0 for all l, m.
The evolution of the activities in the recurrent network is described by
a set of coupled nonlinear equations. Denoting by A_{lm}(t) the activity
of unit (l, m) in the intermediate layer at time t, each update first
pools the weighted input L_{lm} to a unit and then applies a quadratic
nonlinearity coupled with a divisive normalization; the updates for
R_{ri}, R_{ej} and R_{ak} take the same form.
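The displayed evolution equations themselves are not reproduced in the excerpt, so the following is a minimal sketch of one update step under the standard form of this nonlinearity, A_{lm}(t+1) = L_{lm}²(t) / (S + μ Σ L²): the constants `S` and `mu`, and the specific wiring in `intermediate_input` (eye-centered input driving rows, eye position driving columns, head-centered input driving the (l + m) diagonal, reflecting a = r + e on a circular space), are all assumptions, not details given in the text.

```python
import numpy as np

def divisive_update(L, S=0.1, mu=0.001):
    # Quadratic nonlinearity with divisive normalization: square the
    # pooled input L, then divide by a term that grows with the layer's
    # total squared activity. S and mu are hypothetical constants.
    L2 = L ** 2
    return L2 / (S + mu * L2.sum())

def intermediate_input(R_r, R_e, R_a, W_r, W_e, W_a):
    # Pooled input L_{lm} to intermediate unit (l, m). The wiring is an
    # assumed example: rows pool the eye-centered layer, columns pool the
    # eye-position layer, and the (l + m) diagonal pools the
    # head-centered layer.
    N = len(R_r)
    L = (W_r @ R_r)[:, None] + (W_e @ R_e)[None, :]
    idx = (np.arange(N)[:, None] + np.arange(N)[None, :]) % N
    return L + (W_a @ R_a)[idx]
```

Iterating `divisive_update` on the pooled inputs of every layer, with the symmetric weights used in both directions, would implement the relaxation after which the network's estimates are read out.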
The parameter values are K = 20 Hz, ν = 1 Hz, σ = 0.40 radians, and
K_s = C_r = C_e = C_a = 1 s.
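The units of these parameters (a peak rate K in Hz, a baseline ν in Hz and a width σ in radians) are consistent with a circular-normal tuning curve; one standard form, stated here as an assumption rather than the paper's own equation, is

```latex
f_i(x) \;=\; K \, e^{\left(\cos(x - x_i) - 1\right)/\sigma^2} \;+\; \nu ,
```

where x_i is the preferred position of unit i, so that the unit fires at K + ν = 21 Hz when x = x_i and decays toward the baseline ν = 1 Hz as x moves away from x_i on the circle.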