IEEE Access
Article . 2020 . Peer-reviewed
License: CC BY
Data sources: Crossref, UnpayWall, DOAJ, Datacite

PID Controller Autotuning Design by a Deterministic Q-SLP Algorithm

Authors: Jirapun Pongfai; Xiaojie Su; Huiyan Zhang; Wudhichai Assawinchaichote

Abstract

The proportional integral and derivative (PID) controller is extensively applied in many applications. However, three parameters must be properly adjusted to ensure effective performance of the control system: the proportional gain ($K_{P}$), integral gain ($K_{I}$) and derivative gain ($K_{D}$). Therefore, the aim of this paper is to optimize and improve the stability, convergence and performance in autotuning the PID parameter by using a deterministic Q-SLP algorithm. The proposed method is a combination of the swarm learning process (SLP) algorithm and Q-learning algorithm. The Q-learning algorithm is applied to optimize the weight updating of the SLP algorithm based on the new deterministic rule and closed-loop stabilization of the learning rate. To validate the global optimization of the deterministic rule, it is proven based on the Bellman equation, and the stability of the learning process is proven with respect to the Lyapunov stability theorem. Additionally, to demonstrate the superiority of the performance and convergence in autotuning the PID parameter, simulation results of the proposed method are compared with those based on the central position control (CPC) system using the traditional SLP algorithm, the whale optimization algorithm (WOA) and improved particle swarm optimization (IPSO). The comparison shows that the proposed method can provide results superior to those of the other algorithms with respect to both performance indices and convergence.
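
To make the structure described in the abstract concrete, the sketch below pairs a discrete PID controller with a tabular Q-learning loop that nudges the three gains. This is only a minimal illustration of reinforcement-learning-based gain tuning, not the paper's deterministic Q-SLP algorithm: the SLP weight update, the deterministic rule, the learning-rate stabilization, and the CPC plant are not given in the abstract, so the first-order plant, the reward (negative integral of absolute error), the gain-increment action set, and the initial gains used here are hypothetical placeholders.

```python
# Minimal illustration only: a discrete PID controller plus a tabular Q-learning
# loop that adjusts (Kp, Ki, Kd). The plant, reward, action set and initial gains
# are hypothetical placeholders, not the paper's deterministic Q-SLP method.
import random


class PID:
    """Discrete PID: u[k] = Kp*e[k] + Ki*sum(e)*dt + Kd*(e[k] - e[k-1])/dt."""

    def __init__(self, kp, ki, kd, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def iae(gains, setpoint=1.0, steps=500, dt=0.01):
    """Integral of absolute error on a toy first-order plant dy/dt = -y + u."""
    pid, y, cost = PID(*gains, dt=dt), 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        y += dt * (-y + pid.step(e))
        if abs(y) > 1e3:                      # penalize unstable closed loops early
            return 1e6
        cost += abs(e) * dt
    return cost


# Actions: small increments applied to each gain (placeholder action set).
ACTIONS = [(dp, di, dd) for dp in (-0.1, 0.0, 0.1)
                        for di in (-0.1, 0.0, 0.1)
                        for dd in (-0.01, 0.0, 0.01)]


def autotune(episodes=300, alpha=0.1, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning over discretized gain triples; reward = -IAE."""
    q = {}                                    # Q-table keyed by (state, action index)
    gains = (1.0, 0.5, 0.05)                  # placeholder initial Kp, Ki, Kd
    best_gains, best_cost = gains, iae(gains)
    for _ in range(episodes):
        s = tuple(round(g, 2) for g in gains)
        if random.random() < epsilon:         # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        gains = tuple(max(0.0, g + d) for g, d in zip(gains, ACTIONS[a]))
        cost = iae(gains)
        s2 = tuple(round(g, 2) for g in gains)
        best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
        # Standard Q-learning update with reward -cost (lower IAE is better).
        q[(s, a)] = (1 - alpha) * q.get((s, a), 0.0) + alpha * (-cost + gamma * best_next)
        if cost < best_cost:                  # remember the best gains seen so far
            best_gains, best_cost = gains, cost
    return best_gains


if __name__ == "__main__":
    print("Tuned (Kp, Ki, Kd):", autotune())
```

In the paper's method, the Q-learning update drives the weight update of the SLP algorithm under the deterministic rule, with the learning rate constrained for closed-loop stability; the sketch above only conveys the general tune-simulate-evaluate loop that such an autotuner runs.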

Keywords

Optimal Control, Economics, Quantum mechanics, PID Controller, optimal control, Adaptive Control, Engineering, central position control system, swarm learning process algorithm, Machine learning, FOS: Mathematics, Controller Tuning, Stability (learning theory), Economic growth, Analysis and Design of Fractional Order Control Systems, Lyapunov function, Q-learning algorithm, Temperature control, Arithmetic, Control engineering, Physics, Extremum Seeking Control in Dynamic Systems, Computer science, TK1-9971, Algorithm, Autotuning gain, Computational Theory and Mathematics, Control and Systems Engineering, Notation, Physical Sciences, Adaptive Dynamic Programming for Optimal Control, Computer Science, Convergence (economics), Nonlinear system, PID controller, Electrical engineering. Electronics. Nuclear engineering, Mathematics, Nonlinear Control

  • BIP! impact indicators (provided by BIP!)
    Selected citations: 12 (derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically)
    Popularity: Top 10% (reflects the "current" impact/attention, the "hype", of an article in the research community at large, based on the underlying citation network)
    Influence: Average (reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically)
    Impulse: Top 10% (reflects the initial momentum of an article directly after its publication, based on the underlying citation network)