Analysts at Lux Research highlighted five problems facing deep learning.
Deep learning requires ongoing access to the large datasets that models are trained on. While this is not a problem for consumer applications, which can collect large volumes of data on their own, most teams lack a data foundation of their own to build development on.
The reason deep learning can show such good results is the large number of interconnected neurons (free parameters), which lets a model capture subtle nuances in the data. But this also increases the number of hyperparameters whose values must be fixed before training begins, and poorly chosen data transformations carry the risk of overfitting.
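The link between free parameters and overfitting can be illustrated outside of neural networks. The sketch below (a hypothetical toy example, not from the Lux Research report) fits a polynomial with as many coefficients as there are training points; it memorizes the noise in the training set and generalizes poorly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying function.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)

# Clean held-out points from the same function.
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)

# A degree-9 polynomial has 10 free parameters, one per training
# point, so it can thread through every noisy sample almost exactly.
coeffs = np.polyfit(x_train, y_train, deg=9)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_err:.6f}")  # typically near zero: noise memorized
print(f"test MSE:  {test_err:.6f}")   # typically much larger: poor generalization
```

The same dynamic plays out in deep networks, only with millions of parameters instead of ten, which is why regularization and careful validation matter so much.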
Deep learning networks, powerful as they are, are slowed down by their sheer size. They also require a great deal of time to train, which makes retraining and incremental adjustments difficult.
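To make "huge size" concrete, the parameter count of even a modest fully connected network can be computed directly (the layer sizes below are illustrative assumptions, not figures from the report):

```python
# Count the trainable parameters of a fully connected network:
# each layer contributes (inputs * outputs) weights plus `outputs` biases.
def count_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A hypothetical image classifier: 224x224 grayscale input,
# two wide hidden layers, then a 10-class output.
sizes = [224 * 224, 4096, 4096, 1000, 10]
print(count_parameters(sizes))  # well over 200 million parameters
```

Every one of those parameters must be stored, updated on every training step, and shipped with the model, which is where both the memory footprint and the long training times come from.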
Because of the huge number of layers, nodes, and connections, it is hard to understand how a deep learning network reaches its conclusions. Understanding the decision-making process becomes critical in applications that serve people, where the outcome depends on the system being correct.
Deep learning networks are also highly susceptible to a "butterfly effect": small changes in the input data can lead to fundamentally different results, which makes them inherently unstable.
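This instability can be shown with a deliberately simple stand-in for a network: a linear classifier with large weights (a toy construction of mine, echoing the adversarial-example phenomenon, not an example from the report). A change of one thousandth in a single input feature flips the predicted class:

```python
import numpy as np

# A toy linear classifier: class = 1 if w . x >= 0, else 0.
# Large weights make the decision boundary razor thin, so a tiny
# perturbation of the input can flip the prediction.
w = np.array([1000.0, -1000.0])

def predict(x):
    return 1 if np.dot(w, x) >= 0 else 0

x = np.array([0.5, 0.5])                  # lands exactly on the boundary
x_perturbed = x + np.array([0.0, 1e-3])   # nudge one feature by 0.001

print(predict(x), predict(x_perturbed))   # prints: 1 0
```

Deep networks stack many such steep transformations, which is why carefully crafted, imperceptible input perturbations can change their output so drastically.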