Powered by OpenAIRE graph
Master thesis, 2021
License: CC BY-NC-ND
Data sources: OpenMETU

Malware Detection Using Transformers-based Model GPT-2

Transformatör Tabanlı Model GPT-2 Kullanarak Zararlı Yazılım Tespiti
Author: Şahin, Nazenin


Abstract


The variety and complexity of malicious content have significantly impacted end-users of Information and Communication Technologies (ICT). To mitigate this impact, automated machine learning techniques have been developed to proactively defend user systems against malware. Transformers, a family of attention-based deep learning models, have recently been shown to be effective at solving various malware problems, mainly by employing Natural Language Processing (NLP) methods. In the present study, we propose a Transformer architecture to detect malicious software automatically. We present models based on GPT-2 (Generative Pre-trained Transformer 2) that take as input assembly code obtained from static analysis of PE (Portable Executable) files. We generated a pre-trained model to capture various characteristics of both malicious and benign assembly code; the captured characteristics improve the model's detection performance. Moreover, we created a binary classification model that uses the preprocessed features to characterize existing malicious and benign code pieces, so that the resulting model distinguishes between novel malware and benign assembly code. Finally, we also used GPT-2's publicly released pre-trained model to improve detection accuracy. The experiments showed that fine-tuning with our pre-trained model and with GPT-2's pre-trained model led to accuracy values of up to 85.4% and 78.3%, respectively.
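The abstract describes feeding assembly code obtained from static analysis of PE files into GPT-2-based models. As a minimal illustrative sketch of one plausible preprocessing step (not the thesis's actual pipeline), textual disassembly could be reduced to its sequence of opcode mnemonics before tokenization; the function name `extract_opcodes` and the assumed line format (`mnemonic operand, operand`) are illustrative assumptions:

```python
import re

def extract_opcodes(disasm_text):
    """Reduce textual disassembly to its opcode mnemonic sequence.

    Each non-empty line is assumed to begin with a mnemonic such as
    'mov', 'call', or 'ret', optionally followed by operands. The
    returned space-separated string is the kind of token stream a
    GPT-2 style tokenizer could consume.
    """
    opcodes = []
    for line in disasm_text.splitlines():
        line = line.strip()
        if not line:
            continue
        # Capture the leading mnemonic (letters, digits, dots), drop operands.
        match = re.match(r"([a-zA-Z][a-zA-Z0-9.]*)", line)
        if match:
            opcodes.append(match.group(1).lower())
    return " ".join(opcodes)

sample = """
push ebp
mov ebp, esp
call 0x401000
xor eax, eax
ret
"""
print(extract_opcodes(sample))  # prints: push mov call xor ret
```

A sequence like this would then be tokenized and passed to a Transformer with a classification head for the benign/malicious decision; the actual feature set used in the thesis is not specified on this page.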

Country
Turkey
Related Organizations
Keywords

Transformers, Static Analysis, Malware Detection, GPT-2, NLP

  • BIP! impact indicators (Impact by BIP!)
    - Selected citations: 0 (derived from selected sources; an alternative to the "Influence" indicator)
    - Popularity (the "current" impact/attention, the "hype", of an article in the research community at large, based on the underlying citation network): Average
    - Influence (the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically): Average
    - Impulse (the initial momentum of an article directly after its publication, based on the underlying citation network): Average