History of neural networks in the USSR

The 1960s became a golden age of Soviet science. By 1975, a quarter of all the world's scientists worked in the USSR, and much attention was paid to the exact sciences, whose fruits often had applied value. Cybernetics, in which great potential was seen, was not passed over either: thanks in part to the efforts of the military engineer and scientist Anatoly Kitov, it was rehabilitated after a brief period of official "disgrace." Work was under way on automatic control, machine translation, network technologies... Today we would say that the USSR had a whole school of artificial intelligence! Within cybernetics, the direction we now call the neural network approach also developed. Jürgen Schmidhuber, creator of the famous LSTM neural network architecture and today also known as a historian of deep learning, often refers in his talks to the contribution of Soviet scientists to the formation of this field.

In the 1960s, several serious books on the subject were published in the USSR in large print runs, and, judging by scientometric statistics, a fair share of the world's connectionist research was published in Russian. At some point connectionism became so popular that it was taken up not only in the major scientific centers but also in other cities and republics, for example in Armenia and Georgia. Unfortunately, only a small fraction of the publications of those years has been digitized so far; most of the works can be found only in offline libraries.

"There was a car from Tambov," or what McCarthy was doing in the USSR

In addition to Odessa, Kiev and Tbilisi, the delegation visited Baku, Moscow, Minsk, Leningrad and several other cities of the union republics. In Moscow, McCarthy met his old acquaintance, Academician Andrei Ershov; the two had first met in December 1958 in the UK at the Conference on the Mechanisation of Thought Processes. After his visit to Moscow, McCarthy, accompanied by Ershov, went to the Novosibirsk Akademgorodok, from where he returned home via Moscow (in the realities of the Cold War, when Novosibirsk was a semi-closed scientific center, Ershov had to work hard to get this visit approved).

Judging by their correspondence, the relationship between Ershov and McCarthy was quite friendly, not merely professional. For example, in one of his letters McCarthy asks Ershov to send a recording of the song "There was a car from Tambov." Another example: while McCarthy was visiting the Soviet Union, a conflict broke out in the working group of the International Federation for Information Processing over the standards for the Algol 68 language; it was then that Niklaus Wirth broke away from the majority and began work on the Pascal language. In response, McCarthy and Ershov composed and recorded a comic song addressed to the "schismatics," and McCarthy brought the tape to the next meeting of the working group. The song was performed, as its authors recalled, to the melody of the "Russian folk song" "It's not me, silly" (in fact, Bob Dylan's "It Ain't Me, Babe"). The song had both English and Russian versions; here is the chorus of the latter:

Give us a different language,
So that there is no deception in it,
So that any monkey can write in it...
No, no, no, this is not our language...

Three years later, McCarthy came to Akademgorodok once again, this time for two months and as an employee of the Computing Center: he taught a course on program verification at Novosibirsk University. During one of his trips, McCarthy met Alexander Kronrod, who was working on a chess program (whose successor was the famous Kaissa), and agreed to hold the world's first chess match between computer programs. In that match, held in 1967, the Soviet chess program developed at the Institute for Theoretical and Experimental Physics defeated the Stanford University program with a score of 3-1.

Alexey Ivakhnenko and the "Group Method of Data Handling"

Ivakhnenko's scientific interest in self-organizing systems showed itself as early as the 1950s: in 1959 he successfully assembled and tested his own version of the perceptron, the Alpha machine, apparently named after Rosenblatt's α-perceptron. From 1963 Ivakhnenko worked under the famous academician Viktor Mikhailovich Glushkov. Relations between the two scientists were not entirely smooth, however: in 1959 Glushkov wrote in a letter to a colleague that Ivakhnenko's book "attempts to declare elementary self-adjusting systems higher cybernetic devices than computers, which are supposedly capable of implementing only rigid algorithms." Glushkov, it seems, accused Ivakhnenko of trying to "crush cybernetics."

Judging by other evidence, though, the conflict was not as serious as it might appear. One of Ivakhnenko's staff, Mikhail Schlesinger, before working with him had been an employee of Glushkov's institute, where he was engaged in nothing less than the simulation of neural networks on the digital electronic machine "Kiev"! Even after Ivakhnenko moved under Glushkov's leadership, work on neural networks did not stop; despite their disagreements, the scientists continued to work together. Most likely, Glushkov feared that priority would wrongly be given to the development of neurocomputers, which at the time could not solve most problems, especially applied ones. In other words, he argued for the proper allocation of resources rather than for ending work on neural networks. Notably, the disagreement between Glushkov and Ivakhnenko concerned the opposition, still relevant today, between the symbolic approach and connectionism. In the USSR, representatives of the latter were called supporters of the "non-deterministic" approach (in Ivakhnenko's terms, the "self-organization approach"), as opposed to the "deterministic" symbolic approach. These disputes were just as fierce in the USSR as they were in the West.

An important result obtained by Ivakhnenko was the creation and development of the Group Method of Data Handling (GMDH), one of the first deep learning algorithms in history. For Ivakhnenko, as for Yakov Tsypkin, self-learning of a recognition system meant "the process of automatic, that is, proceeding without human intervention, establishment of the boundary dividing the space of input signals and features into areas corresponding to separate images." Already in the early 1970s, Ivakhnenko and his colleagues managed to train eight-layer neural networks built from artificial neurons based on the Kolmogorov-Gabor interpolation polynomial.
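The building block of such networks is a neuron that computes a low-degree polynomial of its inputs and is trained by ordinary least squares. Here is a minimal sketch of one such neuron, assuming two inputs and a quadratic truncation of the Kolmogorov-Gabor polynomial (the function names and the toy data are illustrative, not from Ivakhnenko's papers):

```python
import numpy as np

def kg_features(x1, x2):
    """Quadratic Kolmogorov-Gabor basis for a pair of inputs:
    1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2], axis=1)

def fit_kg_neuron(x1, x2, y):
    """Fit the six polynomial coefficients by least squares."""
    A = kg_features(x1, x2)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict_kg_neuron(coeffs, x1, x2):
    return kg_features(x1, x2) @ coeffs

# Toy check: a target the neuron can represent exactly.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
y = 1.0 + 2.0 * x1 - 3.0 * x1 * x2
coeffs = fit_kg_neuron(x1, x2, y)
# the fitted coefficients recover 1, 2, 0, -3, 0, 0 (up to numerical error)
```

Because the output is linear in the coefficients, training reduces to a regression problem with a closed-form solution, which is what made multi-layer constructions tractable on the hardware of the time.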

Some researchers in the West trained networks with one intermediate layer at about the same time as, or somewhat earlier than, Ivakhnenko. For example, this was done by Rosenblatt's colleagues Sam S. Viglione and Roger David Joseph, after whom the Joseph-Viglione algorithm is named. Ivakhnenko's networks, however, containing eight layers, were clearly ahead of their time. Still, the approaches used in GMDH and by Viglione and Joseph vaguely resemble each other. The Joseph-Viglione algorithm step by step generates and evaluates two-layer feedforward neural networks, automatically identifying small subsets of features that give a better classification of the examples in the training set; the resulting networks are then validated on data not included in the training set. In GMDH, additional layers are added to the neural network at each step and trained using regression analysis (in this respect GMDH goes back to methods developed as early as the 19th century in the works of Legendre and Gauss). A layer reduction procedure is then applied: the prediction accuracy of each neuron is estimated on a validation sample, and the least accurate neurons are removed.
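The grow-then-prune loop described above can be sketched in a few dozen lines. This is a minimal illustration of the GMDH idea, not Ivakhnenko's exact formulation: the quadratic pairwise neuron, the fixed number of layers, and the `keep` width are all simplifying assumptions made here.

```python
import itertools
import numpy as np

def kg_basis(a, b):
    # quadratic Kolmogorov-Gabor terms for one pair of inputs
    return np.stack([np.ones_like(a), a, b, a * b, a**2, b**2], axis=1)

def gmdh_fit(X_train, y_train, X_val, y_val, n_layers=3, keep=4):
    """Grow layers of pairwise polynomial neurons, each fit to the
    target by least squares; keep only the neurons that predict best
    on the held-out validation sample (the 'layer reduction' step)."""
    layers = []
    for _ in range(n_layers):
        candidates = []
        for i, j in itertools.combinations(range(X_train.shape[1]), 2):
            A = kg_basis(X_train[:, i], X_train[:, j])
            w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
            val_pred = kg_basis(X_val[:, i], X_val[:, j]) @ w
            err = np.mean((val_pred - y_val) ** 2)
            candidates.append((err, i, j, w))
        candidates.sort(key=lambda c: c[0])       # most accurate neurons first
        layers.append([(i, j, w) for _, i, j, w in candidates[:keep]])
        # outputs of the surviving neurons become the next layer's inputs
        X_train = np.stack([kg_basis(X_train[:, i], X_train[:, j]) @ w
                            for i, j, w in layers[-1]], axis=1)
        X_val = np.stack([kg_basis(X_val[:, i], X_val[:, j]) @ w
                          for i, j, w in layers[-1]], axis=1)
    return layers

def gmdh_predict(layers, X):
    for layer in layers:
        X = np.stack([kg_basis(X[:, i], X[:, j]) @ w
                      for i, j, w in layer], axis=1)
    return X[:, 0]   # best neuron of the final layer is the model output

# Toy run: a target no single pair neuron can capture on its own.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2
layers = gmdh_fit(X[:200], y[:200], X[200:], y[200:])
mse = np.mean((gmdh_predict(layers, X[200:]) - y[200:]) ** 2)
```

Note that every neuron, in every layer, is trained directly against the target by a closed-form regression; there is no end-to-end gradient descent, and depth emerges from composing the survivors of each selection round.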

The book "Prediction of Random Processes," written by Ivakhnenko together with Valentin Lapa and published in 1969, became a kind of compendium of the techniques studied by Soviet connectionists, and his 1971 book "Systems of Heuristic Self-Organization in Technical Cybernetics" contains not only a detailed description of GMDH but also many examples of its application to applied problems. In this book Ivakhnenko wrote:
