In this article, we present the mathematical foundations of generative machine intelligence and link them to mean-field-type game theory. The key interaction mechanism is self-attention, which exhibits aggregative properties similar to those found in mean-field-type games. An infinite number of neural units is not required to handle mean-field-type terms: for instance, reducing the variance of the error in generative machine intelligence is a mean-field-type problem, yet it does not involve an infinite number of decision-makers. Building on this insight, we construct mean-field-type transformers that operate on data that are not necessarily identically distributed and that evolve across layers through mean-field-type transition kernels. We show that the outputs of these mean-field-type transformers correspond exactly to the mean-field-type equilibria of a hierarchical mean-field-type game. Because the composition of the operators is non-convex, gradient-based methods alone are insufficient: distinguishing a global minimum from other extrema (local minima, local maxima, global maxima, and saddle points) requires alternative methods that exploit hidden convexities of the anti-derivatives of the activation functions. We also discuss the integration of blockchain technologies into machine intelligence, enabling an incentive-design loop for all contributors and blockchain token economics for each system participant. This feature is especially relevant for preserving the integrity of factual data, legislative information, medical records, and scientifically published references, which should remain immutable after generative machine intelligence is applied.
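The aggregative property mentioned above can be made concrete: in single-head self-attention, each token's update is a softmax-weighted average of value vectors, i.e. an expectation under an empirical probability measure over the token population. The following is a minimal illustrative sketch of this mechanism (the function and variable names are our own, not from the article):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention. Each output row is a softmax-weighted
    average of value vectors: an expectation under an empirical
    (mean-field-like) measure over the tokens."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax: each row of `weights` is a probability measure
    # over the token population.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V  # aggregation over the population of tokens

rng = np.random.default_rng(0)
n, d = 5, 4  # number of tokens, embedding dimension
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4): one aggregated vector per token
```

Each output row depends on all tokens only through the softmax weights, which is the mean-field-type coupling through an empirical measure rather than through pairwise identities.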