Connectionist representations are mappings between elements of the problem domain and vectors of activity values. Since cognitive problems do not specify how this representation is to be done, an important part of connectionist cognitive modeling is specifying such a representation. Complex cognitive processing requires the representation of complex structured data. This can be done by decomposing structures into roles and fillers, and building up connectionist representations of structures from connectionist representations of roles and fillers, using the tensor product (conjunctive coding) to bind the patterns representing fillers to the patterns representing roles. Representations for roles and fillers are often constructed by viewing them too as structures, for example, structures whose roles are features of some kind. The patterns representing the irreducible elements, e.g., values of features, can be local or distributed, and can be defined by specifying response functions for the representational units.

Local representations can be viewed as higher-level descriptions of lower-level distributed representations, raising the question of whether the higher-level description yields a model embodying the same kind of processing mechanisms that operate in the lower-level connectionist network. For linear networks, the answer is an exact "yes"; for non-linear networks, the answer is "no," except to some degree of approximation. Network processes can easily produce activation patterns that cannot be exactly decoded into a single structure; in such a case the network has not produced a definite decision among alternative outputs. Such a decision can be forced within the model by adding winner-take-all circuitry; alternatively, the closest match can be calculated and taken to be the answer, or the output can be interpreted as probabilistic, with the most probable responses being those whose patterns are closest to the actual output pattern.
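The tensor-product binding and closest-match decoding described above can be sketched in a few lines of NumPy. The role and filler vectors here (`agent`, `patient`, `John`, etc.) are illustrative assumptions, not drawn from any particular model; with orthonormal role vectors, unbinding by inner product recovers each filler exactly.

```python
import numpy as np

# Hypothetical role and filler patterns (names and values are illustrative).
roles = {"agent": np.array([1.0, 0.0]), "patient": np.array([0.0, 1.0])}
fillers = {"John": np.array([1.0, 0.0, 0.0]),
           "Mary": np.array([0.0, 1.0, 0.0]),
           "book": np.array([0.0, 0.0, 1.0])}

def bind(role, filler):
    # Tensor product (conjunctive coding): the outer product of role and filler.
    return np.outer(role, filler)

# A structure's representation is the sum of its role/filler bindings.
structure = (bind(roles["agent"], fillers["John"])
             + bind(roles["patient"], fillers["Mary"]))

def unbind(structure, role):
    # With orthonormal role vectors, the inner product with a role
    # exactly recovers the filler pattern bound to that role.
    return role @ structure

def closest_filler(vec, fillers):
    # Closest-match decoding: pick the filler pattern nearest the decoded vector.
    return max(fillers, key=lambda name: float(vec @ fillers[name]))

print(closest_filler(unbind(structure, roles["agent"]), fillers))    # -> John
print(closest_filler(unbind(structure, roles["patient"]), fillers))  # -> Mary
```

When role vectors are merely similar rather than orthogonal, unbinding returns a blend of fillers, which is exactly the situation where closest-match or probabilistic interpretation of the output becomes necessary.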
The internal representations networks create for themselves in their hidden units can be analyzed by examining individual hidden units and identifying their receptive or projective fields: the conditions on the input or output under which a hidden unit is active. Hidden representations can also be studied more globally by analyzing the hidden activity vectors, or the contributions of hidden units, using statistical techniques such as hierarchical clustering or factor analysis.
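The global style of analysis can be sketched with SciPy's hierarchical clustering: each input item contributes one hidden activity vector, and items whose vectors are similar fall into the same cluster. The item names and activity values below are invented for illustration, not taken from a trained network.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical hidden activity vectors, one per input item
# (values are illustrative, not from a real trained network).
hidden = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9, 0.8]),
    "bus": np.array([0.0, 0.1, 0.8, 0.9]),
}
names = list(hidden)
X = np.stack([hidden[n] for n in names])

# Agglomerative (hierarchical) clustering of the activity vectors:
# average-linkage dendrogram, then cut into two clusters.
Z = linkage(X, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

clusters = {}
for name, lab in zip(names, labels):
    clusters.setdefault(lab, []).append(name)
print(list(clusters.values()))  # animals and vehicles group separately
```

The resulting dendrogram plays the same interpretive role as a receptive-field analysis, but at the level of whole activity patterns rather than single units; factor analysis or principal components can be substituted for the clustering step.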