The healthcare sector increasingly relies on Artificial Intelligence (AI) to automate tasks and assist in care delivery, since patient data are dynamic and voluminous. Using machine learning methods, healthcare providers attempt to improve services with algorithms that help deliver individualized treatment plans while mitigating risk factors. Palliative Care (PC) is a form of health care provided to patients with life-limiting illness. It requires regular monitoring of symptoms and of the individual's performance status, maintaining the patient's Quality of Life (QoL), and supporting the caregiver. Maintaining QoL requires relief from the symptom burden of serious illness; palliative care therefore aims to relieve commonly observed symptoms such as pain, nausea, fatigue, depression, dyspnoea, loss of appetite, and poor sleep. This paper highlights the various contexts in which Machine Learning (ML), Natural Language Processing (NLP) and Multi-Agent Systems (MAS) serve as useful tools in assisting palliative care. AI models implementing machine learning and deep learning algorithms have been developed to predict mortality in PC patients. Using hierarchical clustering of biomarkers and NLP techniques, predictions of patients' survival curves from the time of visit have been reported. Machine learning algorithms have also been employed to identify pauses in clinical conversations and classify them accordingly. PC is a multidisciplinary approach, and Multi-Agent Systems have been suggested to analyse patients, manage symptoms and plan proactive care using a multi-layer network of intelligent software agents.
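As a hedged illustration of one technique named above, the sketch below applies hierarchical (agglomerative) clustering to patient biomarker profiles to form risk groups. The biomarker panel, values, and number of clusters are invented for the example and are not drawn from the cited models.

```python
# Illustrative sketch, not the authors' implementation: hierarchical clustering
# of synthetic patient biomarker profiles into risk groups.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Hypothetical biomarker panel per patient (e.g. albumin, CRP, LDH, lymphocytes)
X = rng.normal(size=(120, 4))

# Standardise features so no single biomarker dominates the distance metric
X_scaled = StandardScaler().fit_transform(X)

# Agglomerative (hierarchical) clustering into a small number of groups
model = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = model.fit_predict(X_scaled)

for cluster_id in range(3):
    print(f"cluster {cluster_id}: {np.sum(labels == cluster_id)} patients")
```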
In this paper, we argue that the crisis of teaching can be understood as a crisis of labour that continues to impact academic librarians because it is a historical process grounded in larger socio-political shifts precipitated by capitalism. We demonstrate that the emergence and development of teaching—and specifically teaching information literacy (IL) as a kind of librarian curriculum—in academic libraries in North America corresponds to the emergence of neoliberalism. The shocks created by neoliberal fiscal austerity along with anxiety about de-professionalization and de-skilling provoked by cheaper and more widely available information technology created a mounting crisis of legitimacy in librarianship throughout the late 1970s and into the 1980s. Librarians ostensibly remedied this crisis through the positioning of IL as a central contribution of the profession to the academy and society. The COVID-19 pandemic and economic recessions have only intensified the proletarianization processes that have been ongoing since the 1970s. As teaching, learning, and assessment technologies proliferate in the academy, librarians cannot teach more efficiently to meet the needs of growing university populations. Instead, they must rethink the purpose and goals of librarian teaching in the context of the academy. The question of teaching will not be solved until material conditions of librarian labour in the academy are solved.
This study investigated the role of durational and spectral cues in second language tense and lax vowel contrasts produced by non-native speakers. To test previous claims that speakers rely primarily on durational cues over spectral cues to distinguish L2 tense and lax vowel pairs, citation-style speech data were collected from 16 native speakers of Bangla; the participants were all undergraduate students. The data were collected via a shadowing task in which participants listened to a carefully constructed list of English words in random order and repeated each word immediately after hearing it. The utterances were recorded with a Zoom H1n voice recorder. The collected speech data were annotated and processed using the phonetic analysis software Praat and the semi-automatic annotation toolkit DARLA; statistical analyses were performed using the R statistical computing software. The results indicate that Bangla speakers do not emphasize durational cues to differentiate English tense-lax vowel pairs, contrary to the general patterns reported for other languages; rather, they prefer spectral cues over durational cues.
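The study itself used Praat, DARLA, and R; purely as an illustration of the kind of cue-weighting comparison described, the following Python sketch checks, on synthetic vowel measurements, how well a durational cue versus spectral (F1/F2) cues separate tense and lax tokens. All values and effect sizes are made up.

```python
# Synthetic comparison of durational vs spectral cues for a tense/lax contrast.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
tense = rng.integers(0, 2, size=n)                    # 1 = tense, 0 = lax
duration = 80 + 5 * tense + rng.normal(0, 15, n)      # ms; weak durational separation
f1 = 400 - 60 * tense + rng.normal(0, 30, n)          # Hz; stronger spectral separation
f2 = 2000 + 150 * tense + rng.normal(0, 80, n)

def cue_accuracy(features):
    """Cross-validated accuracy of a classifier using only the given cue(s)."""
    return cross_val_score(LogisticRegression(), features, tense, cv=5).mean()

print("duration only :", cue_accuracy(duration.reshape(-1, 1)))
print("spectral only :", cue_accuracy(np.column_stack([f1, f2])))
```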
Law no. 170 of 8 October 2010, containing “New legal norms about Specific Learning Disabilities at school”, introduced the obligation of specific teaching methods for students with Specific Learning Disabilities (DSA). The school teaching of History has had to adapt to the new legislation, towards an increasingly inclusive form of teaching.
The aim of our study is to discuss the use of artificial intelligence-supported platforms, which have become increasingly popular in recent months, in the context of ethics, opportunities, challenges, and the role of the researcher. In this context, we analysed platforms such as ChatGPT, ChatPDF, Consensus, SciSpace, and Scite Assistant. Based on our analyses, we conclude that regulations governing the use of AI-supported platforms in scientific research should be enacted as soon as possible. Although such platforms offer opportunities for researchers, they also bring challenges such as referencing and the reproducibility of scientific work. Moreover, the use of AI-supported platforms in scientific research also calls the role of the researcher into question.
The Ghost in the Machine - AI's Impact on Cultural Heritage (Research)

Over the past decade, deep learning methods have made remarkable advancements. This progress can be attributed to several factors, such as massive parallelization through the use of Graphics Processing Units (GPUs). This shift in hardware has significantly accelerated the training of deep neural networks, allowing researchers to tackle increasingly complex problems. Another critical factor contributing to the success of deep learning is the acquisition of vast training datasets sourced from the World Wide Web, which has become a treasure trove of information. As a result, these models have become adept at capturing intricate patterns and representations in various domains. Furthermore, the development of efficient and reusable neural network architectures has also played a crucial role in the advancement of deep learning. Taken together, these evolutions have paved the way for human-like or even superhuman performance in specific domains. Notably, the emergence of pre-trained large language models has demonstrated the capability to grasp the intricate semantics of natural languages, yielding exceptional outcomes in classification, prediction, and generation tasks. Similarly, in the realm of image generation, models such as Stable Diffusion and DALL-E have showcased their prowess. Tasks that once demanded human expertise are now on the brink of being supported or entirely taken over by machine intelligence. In the subsequent sections, we illuminate some recent breakthroughs in AI-assisted search and retrieval systems within the domain of cultural heritage. One such example is the development of a multimodal search system for Iconclass, incorporating vision-language pre-trained machine learning models. However, it is paramount to approach the application of these cutting-edge generative AI models in scientific and research contexts with due diligence; one must remain mindful of potential inaccuracies and hallucinations that these systems can inadvertently produce. It is worth noting that deep learning and large language models constitute only a specific subset of artificial intelligence, falling under the broader category of machine learning. Symbolic knowledge representation is another distinct subdomain of AI, distinguished by its mathematical rigor and formalism; in this realm, inaccuracies or inconsistencies in underlying assumptions can be readily identified and rectified. Knowledge graphs built upon ontologies present a viable avenue for enhancing the explainability of black-box statistical deep learning systems, and they possess the capacity to flag false or counterfeit information. As a result, future information systems are poised to embrace hybrid solutions that combine symbolic and subsymbolic AI approaches, drawing on the strengths of both paradigms to offer results that are not only reliable but also trustworthy.
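As a rough sketch of the kind of multimodal search mentioned for Iconclass, the snippet below embeds a free-text query and a handful of images into a shared vector space with a pre-trained vision-language model and ranks the images by cosine similarity. The model checkpoint, image paths, and query are assumptions for illustration, not the system described in the paper.

```python
# Minimal text-to-image retrieval sketch with a pre-trained vision-language model.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # assumed pre-trained CLIP checkpoint

# Hypothetical collection of digitised artworks (placeholder file names)
image_paths = ["annunciation.jpg", "still_life.jpg", "portrait.jpg"]
image_embeddings = model.encode([Image.open(p) for p in image_paths],
                                convert_to_tensor=True)

# Free-text query embedded into the same vector space
query_embedding = model.encode("an angel announcing news to a seated woman",
                               convert_to_tensor=True)

# Rank images by cosine similarity to the query
scores = util.cos_sim(query_embedding, image_embeddings)[0]
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")
```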
Recruitment has always been a crucial task for the success of companies, especially consulting companies, for which hiring is a centerpiece of the business model. The growth of the labor market, along with the increasing number of specialized skills required by companies, has motivated the exploration of techniques to optimize and even automate parts of the recruitment process. The numerous advances made in the fields of Artificial Intelligence and Natural Language Processing over the past few decades offer the opportunity to process recruitment data efficiently. We examine the use of a job recommender system in a consulting company, with a focus on the explanation of the recommendations and their perception by users. First, we experiment with knowledge-based recommendations using ESCO, the European ontology of skills and occupations, which shows promising results; because of its current limitations, however, we ultimately use a semantic-based recommender system that has since become part of the company's processes and offers the opportunity for qualitative and quantitative studies on the impact of the recommendations and their explanations. We link the availability of explanations to major gains in efficiency for recruiters. Explanations also offer recruiters a valuable way to fine-tune recommendations through contextual feedback. Such feedback is not only useful for generating recommendations at run time, but also provides valuable data to evaluate models and further improve the system. Going forward, we advocate that the availability of explanations should become the standard for every job recommender system.
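A minimal sketch of the knowledge-based matching idea follows, assuming a simple skill-overlap score between a candidate and a job profile. The skill labels and scoring function are illustrative and not the company's production recommender, but the returned matches show how an explanation could accompany each recommendation.

```python
# Illustrative skill-overlap matcher with an explanation for the recruiter.
def skill_overlap_score(candidate_skills, required_skills):
    """Return the fraction of required skills covered by the candidate, plus the
    matched skills themselves so a human-readable explanation can be generated."""
    matches = candidate_skills & required_skills
    score = len(matches) / len(required_skills) if required_skills else 0.0
    return score, matches

candidate = {"python", "machine learning", "natural language processing"}
job = {"python", "natural language processing", "information retrieval"}

score, matches = skill_overlap_score(candidate, job)
print(f"match score: {score:.2f}")
print("explanation: candidate covers", ", ".join(sorted(matches)))
```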
The goal of this thesis is to build and train machine learning models that combine the power of natural language processing with visual understanding, enabling a comprehensive and detailed comprehension of video content. First, we propose two scalable approaches to developing video question answering models without the need for costly manual annotation. We automatically generate video question answering data from narrated videos using text-only question-generation models. We then show that a multi-modal transformer trained contrastively on the generated data can answer visual questions in a zero-shot manner. To bypass the data generation procedure, we present an alternative approach, dubbed FrozenBiLM, that directly leverages bidirectional masked language models. Second, we develop TubeDETR, a transformer model that can spatially and temporally localize a natural language query in an untrimmed video. Unlike prior spatio-temporal grounding approaches, TubeDETR can be effectively trained end-to-end on untrimmed videos. Third, we present a new model and a new dataset for multi-event understanding in untrimmed videos. We introduce the Vid2Seq model, which generates dense natural language descriptions and corresponding temporal boundaries for all events in an untrimmed video by predicting a single sequence of tokens. Moreover, Vid2Seq can be effectively pretrained on narrated videos at scale using transcribed speech as pseudo-supervision. Finally, we introduce VidChapters-7M, a large-scale dataset of user-chaptered videos. Based on this dataset, we evaluate state-of-the-art models on three tasks, including video chapter generation. We also show that video chapter generation models transfer well to dense video captioning in both zero-shot and finetuning settings.
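As a hedged sketch of the data-generation idea (question-answer pairs derived from narrations with text-only models), the snippet below highlights an answer span in a transcript sentence and asks a text-to-text question-generation model to produce the matching question. The checkpoint name, prompt format, and answer-selection heuristic are assumptions, not the thesis implementation.

```python
# Illustrative question generation from a narration transcript (assumed checkpoint).
from transformers import pipeline

# Hypothetical narration transcript aligned with a video segment
narration = "The chef whisks the eggs and pours them into a hot pan."

# 1) Pick a candidate answer span from the narration (trivial heuristic here).
answer = "the eggs"

# 2) Generate a question conditioned on the narration with the answer highlighted,
#    using a text-to-text model assumed to be fine-tuned for question generation.
qg = pipeline("text2text-generation", model="valhalla/t5-base-qg-hl")
prompt = "generate question: " + narration.replace(answer, f"<hl> {answer} <hl>")
question = qg(prompt)[0]["generated_text"]

print("Q:", question)
print("A:", answer)
# The resulting (video segment, question, answer) triples could then serve as
# training data for a multi-modal video question answering model.
```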
This paper delves into the transformative intersection of emerging technologies and digital libraries, illuminating a path toward an enriched and accessible knowledge landscape. Focusing on Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), Augmented Reality (AR), and Virtual Reality (VR), the study explores how these technologies redefine digital library experiences. AI and ML algorithms empower intuitive content curation and recommendation, reshaping the way users interact with digital resources. NLP bridges the gap between human language intricacies and digital systems, enhancing search functionalities and making information retrieval seamless. AR overlays digital information onto the physical world, expanding interactive learning possibilities, while VR immerses users in virtual realms, revolutionizing educational paradigms. The paper critically examines the practical integration of these technologies, ensuring digital libraries not only preserve vast knowledge repositories but also present information in engaging and accessible formats. Through AI-driven metadata generation and content tagging, digital libraries are systematically organized and enriched, amplifying search accuracy. These innovations not only preserve the past but also illuminate a future where knowledge is universally accessible, fostering curiosity, learning, and exploration. The study not only theoretically explores the potential of these technologies but also delves into the perceptions of practical library users, ensuring a user-centric approach in shaping the digital libraries of tomorrow. This research contributes significantly to the evolving landscape of digital libraries, paving the way for inclusive, immersive, and engaging knowledge experiences for diverse users worldwide.
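To make the metadata-enrichment idea concrete, here is a small sketch of AI-driven content tagging using zero-shot classification: a record's abstract is scored against a list of candidate subject headings, and labels above a threshold are kept as tags. The checkpoint, labels, and threshold are assumptions chosen for the example.

```python
# Illustrative zero-shot subject tagging for a digital library record.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

abstract = ("This thesis studies transformer models for localizing natural "
            "language queries in untrimmed videos.")
candidate_subjects = ["computer vision", "natural language processing",
                      "library science", "cultural heritage"]

result = classifier(abstract, candidate_labels=candidate_subjects, multi_label=True)

# Keep labels above a confidence threshold as subject tags for the record
tags = [label for label, score in zip(result["labels"], result["scores"])
        if score > 0.5]
print("suggested subject tags:", tags)
```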