
Fuzzy matrices play a crucial role in fuzzy logic and fuzzy systems. This paper investigates the problem of learning a fuzzy matrix in a supervised manner from sample pairs of input–output fuzzy vectors, where the inference mechanism is the max–min composition rule. We propose an optimization approach based on stochastic gradient descent (SGD), which defines a mean-squared-error objective and constrains the matrix elements to the interval [0, 1]. To address the non-smoothness of the max–min composition rule, a modified smoothing function for max–min is employed, ensuring stability during optimization. Experimental results demonstrate that the proposed method achieves high learning accuracy and reliable convergence across multiple randomly generated input–output vector samples.
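The abstract does not spell out the smoothing function or update rule, so the following is only a minimal sketch of the general idea: the exact max–min composition, a log-sum-exp-based smooth surrogate (one common choice; the paper's "modified smoothing function" may differ), and a projected SGD step on the MSE with the gradient derived by hand. All function names and the temperature parameter `beta` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def maxmin_compose(x, A):
    """Exact max-min composition: y_j = max_i min(x_i, A[i, j])."""
    return np.max(np.minimum(x[:, None], A), axis=0)

def smooth_maxmin(x, A, beta=20.0):
    """Smooth surrogate of max-min composition (assumed form, not the
    paper's exact smoothing function). Soft-min and soft-max are both
    built from log-sum-exp; larger beta means a tighter approximation.
    Returns y and the softmax weights needed for the gradient."""
    # soft-min(x_i, A_ij) = -(1/beta) * log(exp(-beta*x_i) + exp(-beta*A_ij))
    m = -np.logaddexp(-beta * x[:, None], -beta * A) / beta
    z = beta * m
    zmax = z.max(axis=0)
    # soft-max over i: (1/beta) * log sum_i exp(beta * m_ij), shifted for stability
    y = (zmax + np.log(np.exp(z - zmax).sum(axis=0))) / beta
    w = np.exp(z - zmax)
    w = w / w.sum(axis=0, keepdims=True)  # softmax weights over rows i
    return y, w

def sgd_step(x, t, A, beta=20.0, lr=0.5):
    """One projected-SGD step on MSE(smooth composition, target t).
    d(soft-min)/dA_ij = sigmoid(beta * (x_i - A_ij)); the clip keeps
    every matrix element inside [0, 1], as the constraints require."""
    y, w = smooth_maxmin(x, A, beta)
    s = 1.0 / (1.0 + np.exp(-beta * (x[:, None] - A)))
    grad = (2.0 / y.size) * (y - t)[None, :] * w * s
    return np.clip(A - lr * grad, 0.0, 1.0)
```

A typical experiment in the paper's spirit would generate targets `t = maxmin_compose(x, A_true)` for random `x` in [0, 1]^n, initialize `A` (e.g., at 0.5), and loop `sgd_step` over the samples until the MSE settles.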
fuzzy matrix, fuzzy set, stochastic gradient descent, supervised learning, decision making
