Because of the aforementioned automatic feature extraction.
In this case, the algorithm chooses for itself which features are relevant when making decisions. The problem is that those features are almost impossible to decipher, since they are usually just lists of numbers.
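A minimal sketch of what "a feature is just a list of numbers" means in practice. The weights here are random stand-ins for values a real training run would produce; nothing about this toy layer corresponds to any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))   # weights of one hidden layer: 8 inputs -> 4 learned features
x = rng.normal(size=8)        # one input example

feature = np.tanh(x @ W)      # the "feature" the network extracted for this input
print(feature)                # four opaque floats; nothing labels what each one "means"
```

The output is a handful of floats with no attached semantics. In a real network there are thousands to billions of these, which is why reading them off tells you almost nothing.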
I do exactly this kind of thing for my day job. In short: reading a syntactic description of an algorithm written in assembly language is not the equivalent of understanding what you’ve just read, which in turn is not the equivalent of having a concise and comprehensible logical representation of what you’ve just read, which in turn is not the equivalent of understanding the principles according to which the logical system thus described will behave when given various kinds of input.
This may be a dumb question, but why can’t you set the debugger on and step thru the program to see why it branches the way it does?
Because it doesn’t have branches, it has neurons - and A LOT of them.
Each of them is tuned by the input data, which is a long and expensive process.
In the end, you hope your model has picked up on real patterns and isn't doing things at random.
But all you see is just weights on countless neurons.
Not sure I’m describing it correctly though.
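To make the "no branches to step through" point concrete, here's a hypothetical two-layer net (random weights, no specific framework assumed). Its "decision" is nothing but arithmetic over weight arrays, so a debugger step-through shows matrix multiplies rather than an if/else chain you could read off:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)  # layer 1 weights/biases
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)  # layer 2 weights/biases

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # ReLU: the closest thing to a "branch", applied elementwise
    return h @ W2 + b2              # output scores

scores = forward(rng.normal(size=3))
# Setting a breakpoint inside forward() just shows two matrix multiplies;
# the "reasons" for the result are smeared across all 27 weights at once.
```

Stepping through this in a debugger executes the same two lines for every input; what changes between inputs is only the numbers flowing through, which is why the usual step-and-inspect workflow doesn't explain the model's behavior.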