
In this research, we developed a new deep neural network model for human action recognition, composed of an autoencoder and a pattern recognition neural network (PRNN). Our approach consists of two stages: a system learning stage and an action recognition stage. In the system learning stage, we first extracted human body outlines from each image frame and combined the outlines into an overlay of binary images to use as training data. An autoencoder was then trained on these overlays to extract action features. Next, we used supervised learning to train a PRNN on the extracted features. Finally, we combined the autoencoder with the PRNN to build a new deep neural network, the APRNN, and optimized its performance by fine-tuning. In the action recognition stage, human action sequences were translated into binary overlay images, and the APRNN was used to classify the actions. Test results showed that our method outperformed existing approaches.
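The overlay construction described above can be sketched as a logical OR of per-frame binary silhouettes. This is a minimal illustration, not the authors' implementation; the helper name `build_overlay` and the use of NumPy are assumptions for the sketch.

```python
import numpy as np

def build_overlay(silhouettes):
    """Combine per-frame binary silhouettes (H x W arrays of 0/1)
    into a single binary overlay image via logical OR.
    Hypothetical helper illustrating the training-data step."""
    overlay = np.zeros_like(silhouettes[0], dtype=bool)
    for s in silhouettes:
        overlay |= s.astype(bool)  # accumulate the body outline of each frame
    return overlay.astype(np.uint8)

# Toy example: a blob that moves between two frames leaves
# both positions marked in the overlay.
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1, 1] = 1
f2 = np.zeros((4, 4), dtype=np.uint8); f2[2, 2] = 1
ov = build_overlay([f1, f2])
print(int(ov.sum()))  # -> 2
```

The resulting overlay summarizes the motion of an entire action sequence in one image, which is what the autoencoder is trained on.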
