Artificial neural networks can be applied effectively to a rather general class of problems, yet some important constructions used in the theory still lack a formal foundation. In this paper an attempt is made to formalize several concepts of neuroinformatics and to study their properties from the standpoint of applied universal algebra. It is proposed to treat neural networks as heterogeneous (many-sorted) algebras; this makes it possible to prove for them basic results analogous to the algebraic theorems on homomorphisms and congruences.
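For orientation only (a minimal sketch, with notation chosen here for illustration rather than the formal definitions developed in the paper), a single neuron with $n$ real inputs may be viewed as a two-sorted algebra
\[
  \mathcal{N} = \bigl(\{A, B\};\, f\bigr), \qquad A = \mathbb{R}^{n},\quad B = \mathbb{R},\qquad
  f(x_1,\dots,x_n) = \varphi\!\Bigl(\sum_{i=1}^{n} w_i x_i + \theta\Bigr),
\]
where the $w_i$ are weights, $\theta$ is a bias, and $\varphi$ is an activation function. In this picture a homomorphism between two such algebras is a pair of maps, one per sort, commuting with $f$, and congruences are defined sort-wise, in direct analogy with the classical algebraic theorems mentioned above.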