Since the explosion of the internet, fake news has been a persistent cause for concern: its proliferation online hinders access to reliable information. This work investigates the effectiveness of several machine learning methods for identifying fake news. We train and evaluate five models: Support Vector Machine (SVM), Logistic Regression, Random Forest, Long Short-Term Memory (LSTM), and Naive Bayes. Employing two distinct datasets, we evaluate the models' generalizability, extracting textual features from the news articles and assessing performance using established metrics. The investigation sheds light on the advantages and limitations of each model for fake news classification, contributing to the development of more robust detection systems. Furthermore, we explore how the choice of machine learning paradigm, classical supervised learning (Logistic Regression, Random Forest, SVM) versus deep learning (LSTM), affects detection accuracy. This comparative analysis provides valuable insight into the most suitable approach for the intricate challenge of fake news identification.
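For concreteness, a minimal sketch of such a classical text-classification pipeline is shown below, assuming a hypothetical CSV with text and label columns; the file name, feature settings, and hyperparameters are illustrative, not the paper's. The LSTM, being a sequence model, would need a separate neural implementation and is omitted.

```python
# Hypothetical sketch of the classical pipeline described above:
# TF-IDF text features feeding four of the five classifiers.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("news.csv")   # assumed columns: "text", "label" (0 = real, 1 = fake)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42)

vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2), stop_words="english")
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "SVM": LinearSVC(),
    "Naive Bayes": MultinomialNB(),
}
for name, model in models.items():
    model.fit(Xtr, y_train)
    print(name)
    print(classification_report(y_test, model.predict(Xte)))
```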
In the present era, artificial intelligence (AI) technology is becoming increasingly mature and is gradually being adopted across many fields. In corporate training, this has given rise to a new concept: the Large Language Model (LLM) chatbot. Taking the OTTO chatbot as an example, this paper explores how LLM chatbots can be applied in corporate training and what changes they may bring; by tracing the chatbot's evolution and its possible application scenarios, the importance of LLMs and the role they play in training can be understood. The paper examines how to select data for training, how to prepare that data, and the methods for training LLMs, and introduces the techniques and strategies needed to train chatbots on their own anticipation library. The research elaborates on the core principles of session design, considering how to make communication more effective, how to enhance the user experience, and how to make sessions more flexible. On the experimental side, the paper analyzes in depth how OTTO can be applied to corporate training, fine-tuning the BLOOM model using the LLaMA-Factory framework and the SQuAD dataset and recording the fine-tuning process in detail, including the changes in loss value and learning rate. The experimental results show that deploying LLM chatbots can genuinely improve interactivity and learning outcomes in corporate training.
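The fine-tuning recipe can be illustrated with a minimal sketch. The paper uses the LLaMA-Factory framework; the sketch below expresses the same idea directly with Hugging Face transformers and peft, LoRA-tuning a small BLOOM checkpoint on SQuAD formatted as question-answer prompts. The model size, hyperparameters, and prompt template are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: LoRA fine-tuning of BLOOM on SQuAD, written with
# transformers + peft rather than the LLaMA-Factory framework the paper uses.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bigscience/bloom-560m"          # small BLOOM for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA keeps fine-tuning cheap: only small adapter matrices are trained.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["query_key_value"],       # BLOOM's fused attention projection
    task_type="CAUSAL_LM"))

def to_prompt(ex):
    answer = ex["answers"]["text"][0] if ex["answers"]["text"] else ""
    text = (f"Context: {ex['context']}\nQuestion: {ex['question']}\n"
            f"Answer: {answer}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

train = load_dataset("squad", split="train[:2000]").map(
    to_prompt, remove_columns=["id", "title", "context", "question", "answers"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-squad-lora", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4,
                           logging_steps=50),   # loss and learning rate logged here
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```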
This Master's thesis investigates the transformative potential of Artificial Intelligence (AI) integration within the software development industry. Focusing on tools that enhance coding efficiency and offer natural language processing (NLP) capabilities, the research explores AI adoption patterns, challenges, and the value created by AI. Drawing on a survey of 37 software professionals, the thesis analyses real-world application scenarios, identifies barriers to adoption (e.g., ethical considerations, skills gaps), and quantifies benefits for both developers (e.g., increased productivity) and organisations (e.g., competitive advantage). The findings reveal AI's significant role in boosting productivity, improving the developer experience, accelerating software development lifecycles, and creating a competitive edge for businesses. However, the research also acknowledges potential hurdles, such as ethical concerns, the need for specialised skillsets, data privacy issues, and integration complexities. Drawing on established theories of value creation in the context of software development, the thesis situates these practical findings within a theoretical framework, highlighting the unique impact of AI tools on value creation for both development professionals and their employing organisations. The research contributes to the ongoing dialogue by bridging the gap between theoretical perspectives and real-world applications of AI in software development, shedding light on AI's multifaceted role in enhancing development operations and human-computer interaction. Ultimately, the thesis empowers the software development community to make informed decisions about AI adoption and to leverage its potential to drive innovation and competitive advantage.
“Undulation of a Rupture” emerges out of the enigmatic task of touching the past in the present via the landscapes of the American South. Driven by a profound yearning to trace, locate, and connect with ancestral origins, I took to the land to unearth impressions of these leavings. Expressed expansively through the multiple registers of site, body, and materiality, these findings offer new articulations of being and relating. The land presents itself as a stage engendering the enmeshment of body, earth, and ancestral memory. The body, punctuated through form, texture, and movement, is wielded as a medium, acting as a bridge between what is known and what is felt at the edges of visibility. The mercurial nature of Blackness arises as a complex nexus, intersecting as both a geographical site and a tangible material made manifold photographically. Through these transpositions, I propose new hybrids of Black female subjectivity within a visual framework of abstraction. These renderings offer an ever-evolving exploration of and inquiry into the trans-migrational body.
Cloud computing is an integral part of today's world: it enables individuals and enterprises to provision and manage resources such as compute and storage for their needs with the click of a button. A modular approach to software development has enabled cloud providers to evolve rapidly and deliver an ever-growing number of services, rendering clouds mission-critical. To keep this Achilles' heel promptly serviceable when incidents occur, cloud providers employ significant human resources. However, with the ever-increasing number of services offered by clouds and the growing variety of workloads, such as the recent proliferation of Machine Learning workloads, it is no longer viable for providers to scale their human resources at this pace to ensure the prompt serviceability of their clouds.

In this dissertation, I present my work on improving the serviceability of clouds, leveraging insights from my experience with real debugging workflows employed at the three largest clouds today. I apply techniques from Machine Learning and Natural Language Processing to the vast amount of historical debugging data in clouds to develop tools that assist their engineers. I present a 'Coarsening' framework that enables a transition towards a centralized debugging plane and discuss practical evaluations of tools built using this framework.

I present Revelio, a tool that generates debugging queries for engineers to execute over system-wide logged data, whose results can hint at the root cause of an incident. To enable benchmarking of many techniques, I also built a distributed-systems debugging testbed that can inject faults into services, interface with human users, and collect execution logs across the system. I present AutoARTS, a tool that tags a lengthy postmortem report of a cloud incident with all its root causes from an extensive taxonomy and highlights key pieces of information from the postmortem for ease of analysis. I present PerfRCA, a tool that scales causal discovery to production-scale telemetry to reason about performance degradations. I conclude with my vision for a centralized approach to automatically extracting generalizable debugging assistance for engineers across a cloud.
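As a rough illustration of the kind of NLP-over-historical-data step such tools perform, the sketch below tags postmortem text with root-cause labels via multi-label classification, in the spirit of AutoARTS; the taxonomy labels, training data, and model choice are all invented for illustration and are not the dissertation's actual method.

```python
# Hypothetical sketch of multi-label root-cause tagging over postmortem text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

postmortems = [
    "Deployment rolled out a config change that exhausted connection pools.",
    "A dependency timed out under load; retries amplified the traffic spike.",
]
root_causes = [["config-change", "resource-exhaustion"],
               ["dependency-failure", "retry-storm"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(root_causes)   # binary indicator matrix, one column per label

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(postmortems, Y)

pred = clf.predict(["Bad config push caused pool exhaustion in the frontend."])
print(mlb.inverse_transform(pred))   # e.g. [('config-change', 'resource-exhaustion')]
```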
It is widely held that larger language models, trained on vast quantities of text, excel at generating coherent and fluent text, while small language models still struggle to produce meaningful text beyond a few words. The specific scale at which these abilities emerge is still not well defined. Consequently, a lingering question remains: must a model be large-scale to generate coherent text? In this paper we train a small language model on TinyStories, a synthetic dataset of short stories, with the objective of studying the ability of small language models to generate coherent and consistent English text. We perform a comparative study in which we analyze the convergence of the loss and investigate how adjustments to the number of heads, the number of layers, and the embedding size affect the English text generated by small language models.
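A minimal sketch of the kind of architecture sweep described, assuming a tiny decoder-only transformer in PyTorch; the vocabulary size, dimensions, and sweep grid are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: a tiny causal language model whose heads, layers, and
# embedding size are varied to study their effect on generation quality.
import itertools
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size, d_model, n_heads, n_layers, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        T = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(T, device=ids.device))
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(ids.device)
        return self.head(self.blocks(x, mask=mask))

# Sweep over architecture knobs; each configuration would be trained on
# TinyStories and its loss curve / sample quality compared.
for d_model, n_heads, n_layers in itertools.product([64, 128], [2, 4], [2, 4]):
    model = TinyLM(vocab_size=8_000, d_model=d_model,
                   n_heads=n_heads, n_layers=n_layers)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"d_model={d_model} heads={n_heads} layers={n_layers} params={n_params:,}")
```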
With the recent advancements in artificial intelligence (AI), researchers are working towards building an AI that can understand humans, collaborate with them, and help or guide them in accomplishing everyday chores. Actualizing such an assistant AI poses several challenges, including planning (over certain events), comprehending human instructions, multimodal understanding, and grounded conversational ability.

Imagine that one wishes to perform a task such as “making a plate of fried rice” or “purchasing a suitable sofa bed”, which can require multiple steps of action and the manipulation of certain objects. How would an assistant AI collaborate with humans to accomplish such tasks? One crucial aspect of the system is understanding how and when to take a certain action, which is often learned by interpreting and following guidance: a resource that encompasses knowledge about accomplishing the task and, potentially, the events that will occur during its completion. The guidance can come from human verbal interactions (e.g., in the form of a conversation or a question) or from static written instructional manuals.

In the first part of this thesis, I decompose the proposed system framework into three foundational components: (1) task-step sequencing/planning, where the AI needs to understand the appropriate sequential procedure for performing each sub-task to accomplish the whole task, especially when task knowledge is learned from online instructional resources that can be numerous and do not always come consolidated in the proper order; (2) action-dependency understanding, where an agent should be able to infer the dependencies of performing an action and the outcomes of executing it, in order to assess the situation and adjust the plan for accomplishing tasks; and (3) multimodal grounding and active perception, where we equip the AI with the ability to actively ground the visually perceived surroundings to the textual instructions (or verbal interactions) and to reason over multimodal information throughout task completion.

Combining the two parts, the foundational components as well as the established novel and challenging benchmarks, this thesis aims to provide a comprehensive research road map for next-generation (multimodal) AI assistants.
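Components (1) and (2) can be made concrete with a small sketch: sub-task steps with action dependencies form a directed acyclic graph, and any topological order is a valid plan. The task decomposition below is invented for illustration.

```python
# Hypothetical sketch: sub-task steps with action dependencies as a DAG,
# linearized by topological sort to yield a valid step sequence.
from graphlib import TopologicalSorter  # Python 3.9+

# "making a plate of fried rice": step -> set of prerequisite steps
dependencies = {
    "cook rice":        set(),
    "chop vegetables":  set(),
    "beat eggs":        set(),
    "heat pan":         set(),
    "scramble eggs":    {"beat eggs", "heat pan"},
    "stir-fry veggies": {"chop vegetables", "heat pan"},
    "combine and fry":  {"cook rice", "scramble eggs", "stir-fry veggies"},
    "serve":            {"combine and fry"},
}

# Any topological order is a valid plan; a learned planner would choose among
# valid orders (and revise the plan when an action's outcome changes the state).
plan = list(TopologicalSorter(dependencies).static_order())
print(plan)
```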
With the rapid development of neural models in natural language processing (NLP), large and deep models achieve state-of-the-art results across NLP tasks and are deployed in real-world applications. These models have become black boxes to humans, so effective approaches to controlling NLP models are in demand. Controlling helps a model solve particular tasks. For example, when we ask a model to generate a recipe, we have constraints on which ingredients we want the recipe to contain. In addition, as NLP researchers, we are responsible for preventing models from generating offensive or otherwise unpredictable outputs; deploying them in real-world applications without such safeguards may cause societal issues. To control NLP models, my research focuses on injecting constraints, i.e., sets of rules that the model must follow, to steer model behaviour via constrained inference and decoding. My research goal is to develop techniques that leverage different kinds of constraints in various scenarios, for both structured prediction models and large language models. In general, constraints represent human knowledge of, and expectations for, the model outputs, and constrained inference is the bridge between human beings and neural models.
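One simple form of constrained decoding can be sketched directly: at each generation step, the logits of tokens that would violate a constraint are masked out before the next token is chosen. The toy model, vocabulary, and banned set below are hypothetical.

```python
# Hypothetical sketch of lexically constrained greedy decoding: banned tokens
# get -inf logits at every step, so the model can never emit them.
import torch

def constrained_greedy_decode(step_logits_fn, banned_ids, bos_id, eos_id, max_len=20):
    """step_logits_fn(prefix_ids) -> logits over the vocabulary for the next token."""
    ids = [bos_id]
    for _ in range(max_len):
        logits = step_logits_fn(torch.tensor(ids))
        logits[list(banned_ids)] = float("-inf")   # enforce the lexical constraint
        nxt = int(torch.argmax(logits))
        ids.append(nxt)
        if nxt == eos_id:
            break
    return ids

# Toy "model": random logits over a 10-token vocabulary, ignoring the prefix.
vocab_size = 10
torch.manual_seed(0)
toy_model = lambda prefix: torch.randn(vocab_size)

out = constrained_greedy_decode(toy_model, banned_ids={3, 7}, bos_id=0, eos_id=1)
assert not {3, 7} & set(out[1:])   # banned tokens never appear
print(out)
```

Positive constraints (tokens that must appear in the output) are harder, since masking alone cannot force a future token; they typically require search procedures such as constrained beam search.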
This paper delves into the transformative intersection of emerging technologies and digital libraries, illuminating a path toward an enriched and accessible knowledge landscape. Focusing on Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), Augmented Reality (AR), and Virtual Reality (VR), the study explores how these technologies redefine digital library experiences. AI and ML algorithms empower intuitive content curation and recommendation, reshaping the way users interact with digital resources. NLP bridges the gap between human language intricacies and digital systems, enhancing search functionalities and making information retrieval seamless. AR overlays digital information onto the physical world, expanding interactive learning possibilities, while VR immerses users in virtual realms, revolutionizing educational paradigms. The paper critically examines the practical integration of these technologies, ensuring digital libraries not only preserve vast knowledge repositories but also present information in engaging and accessible formats. Through AI-driven metadata generation and content tagging, digital libraries are systematically organized and enriched, amplifying search accuracy. These innovations not only preserve the past but also illuminate a future where knowledge is universally accessible, fostering curiosity, learning, and exploration. The study not only theoretically explores the potential of these technologies but also delves into the perceptions of practical library users, ensuring a user-centric approach in shaping the digital libraries of tomorrow. This research contributes significantly to the evolving landscape of digital libraries, paving the way for inclusive, immersive, and engaging knowledge experiences for diverse users worldwide.
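As one hedged illustration of the AI-driven content tagging mentioned above, the sketch below extracts the top-weighted TF-IDF terms of each record as subject tags; the records and tag count are invented for illustration, and production systems would use richer models.

```python
# Hypothetical sketch: TF-IDF keyword extraction as automatic subject tags
# for digital library records.
from sklearn.feature_extraction.text import TfidfVectorizer

records = [
    "A survey of augmented reality interfaces for interactive museum learning.",
    "Neural approaches to information retrieval and semantic search in archives.",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(records)
terms = vec.get_feature_names_out()

for i, doc in enumerate(records):
    row = tfidf[i].toarray().ravel()
    tags = [terms[j] for j in row.argsort()[::-1][:3]]   # top-3 weighted terms
    print(f"record {i}: tags = {tags}")
```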