Training and using neural networks makes sense for problems with an unknown or fuzzy domain. Consequently, the complexity and the size of the underlying problem are likewise unknown. It may nevertheless be helpful to estimate the error rate for results that cannot be verified.

#### Artificial intelligence described as a class within computational complexity theory

To obtain clues about its performance, it is necessary to identify AI with a complexity class. Neural networks and other machine-learning techniques can obviously be run on Turing machines, and at least for some types of NNs it can be shown that they are Turing complete.

On the other hand, we know that the results of NNs are heuristic in nature: they can be both false positives and false negatives. I also assume that they always *halt* and return a result.
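These two error modes can be measured directly when ground truth is available. The following is a minimal sketch with made-up illustrative predictions, not real NN output; the function name `error_rates` is my own choice:

```python
# Minimal sketch: measuring false-positive and false-negative rates of a
# heuristic classifier (e.g. an NN) against known ground truth.
# The lists below are made-up illustrative data, not real NN output.

def error_rates(predictions, ground_truth):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
    fn = sum(1 for p, t in zip(predictions, ground_truth) if not p and t)
    negatives = sum(1 for t in ground_truth if not t)
    positives = sum(1 for t in ground_truth if t)
    return fp / negatives, fn / positives

preds = [True, True, False, False, True, False]
truth = [True, False, False, True, True, False]
fpr, fnr = error_rates(preds, truth)
print(fpr, fnr)  # one false positive out of 3 negatives, one false negative out of 3 positives
```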

The closest match I could find within the class of search problems is the class of Monte Carlo algorithms. The corresponding complexity class for decidable randomized problems is **BPP**.
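A defining property of BPP is that the per-run error of a bounded-error randomized decider can be driven down by majority voting over independent runs. The sketch below simulates this with a stand-in decider (`noisy_decide`, correct with probability 2/3 per run); all names and parameters are assumptions for illustration:

```python
import random

# Sketch of BPP-style error amplification: a Monte Carlo decider that is
# correct with probability 2/3 per run; majority voting over repeated
# independent runs drives the error rate down exponentially (Chernoff bound).
# `noisy_decide` is a stand-in for any bounded-error randomized algorithm.

def noisy_decide(true_answer, p_correct=2/3, rng=random):
    """One Monte Carlo trial: returns the right answer with prob. p_correct."""
    return true_answer if rng.random() < p_correct else not true_answer

def amplified_decide(true_answer, runs=101, rng=random):
    """Majority vote over independent trials."""
    votes = sum(noisy_decide(true_answer, rng=rng) for _ in range(runs))
    return votes * 2 > runs

rng = random.Random(42)
trials = 1000
errors = sum(amplified_decide(True, rng=rng) != True for _ in range(trials))
print(errors / trials)  # far below the single-run error of 1/3
```

The majority vote is exactly why the constant 2/3 in the definition of BPP is arbitrary: any error bounded away from 1/2 can be amplified this way.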

Under these assumptions it is now possible to compute a two-dimensional map that predicts the error rate (ranging from 0 = infrared to 1 = ultraviolet). The X axis represents the domain size and the Y axis the problem complexity:
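Such a map can be sketched in code. The error model `err(d, c) = 1 - exp(-k * d * c)` below is a made-up placeholder, chosen only because it maps any (domain size, complexity) pair into [0, 1]; the constant `K` and the axis values are likewise assumptions, not measured NN error rates:

```python
import math

# Illustrative sketch of a two-dimensional error-rate map.
# The model err(d, c) = 1 - exp(-k * d * c) is a placeholder that squashes
# (domain size, complexity) into [0, 1] ("infrared" = 0 ... "ultraviolet" = 1);
# it is NOT a measured error rate.

K = 1e-4  # assumed scaling constant

def error_rate(domain_size, complexity, k=K):
    return 1.0 - math.exp(-k * domain_size * complexity)

domain_sizes = [10, 100, 1000]   # X axis
complexities = [1, 10, 100]      # Y axis
error_map = [[error_rate(d, c) for d in domain_sizes] for c in complexities]

for c, row in zip(complexities, error_map):
    print(c, ["%.3f" % v for v in row])
```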

It is also possible to inspect the result when the error rate does not grow linearly with the domain size:

or with the complexity:
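Such non-linear growth can be inspected by slicing a (hypothetical) error model along one axis and comparing successive values. The model and all constants below are again illustrative placeholders:

```python
import math

# Sketch: checking whether a hypothetical error rate grows linearly with the
# domain size at fixed complexity. The model is an illustrative placeholder,
# not measured data.

def error_rate(domain_size, complexity, k=1e-4):
    return 1.0 - math.exp(-k * domain_size * complexity)

fixed_complexity = 10
sizes = [100, 200, 400, 800]
errs = [error_rate(d, fixed_complexity) for d in sizes]

# Under linear growth, doubling the domain size would double the error;
# here each ratio falls below 2, i.e. the growth is sublinear.
ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
print(ratios)
```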