
The HUBBLE project proposes the creation of an observatory for building and sharing processes for analyzing the traces produced by massive online courses. This observatory will give the actors of such courses (e.g. teachers, researchers, designers, students, administrators and policy makers) the ability to analyze and explain the teaching and learning phenomena occurring within e-learning environments. The analysis processes we build will therefore support actors' decision making, but will also guide researchers and other actors in the elaboration of analytical constructs, models and indicators. The HUBBLE project will be based on a platform that allows the construction, sharing and management of analysis processes over the traces available in the observatory. In this context, we consider it crucial to address the ethical concerns raised by storing and analyzing traces.
ANTIMOINE is a project focused on the tools needed to conduct an anthropological analysis of territories from cultural heritage data. From an application point of view, the targeted areas are education, tourism, land use, territorial project management, etc. From a scientific point of view, the aim of the project is to introduce meaning into legacy information systems so as to provide, in response to a user's inquiry, a context favorable to the interpretation of cultural heritage. This context consists of a set of cultural heritage objects linked by associations carrying heritage-related semantic features. It is not defined a priori, but is built as the user explores a heritage database. To achieve this goal, ANTIMOINE adopts an interdisciplinary approach involving linguistics, data mining and virtual reality. While each partner brings its own scientific challenge, a fifth, transverse challenge emerges: the cooperation between processes specific to each discipline, a cooperation that allows, on the one hand, the introduction of meaning and, on the other, the reduction of complexity. To address these challenges we rely on three theories: the Semantics of Argumentative Possibilities (linguistics), frequent pattern analysis (data mining) and enaction (virtual reality). This work will be illustrated and evaluated through a prototype operating on a real database. The prototype will be developed incrementally and will support three scenarios: education, tourism and museum, each associated with a different hardware platform: desktop computer, digital tablet and immersive device. To overcome these obstacles, ANTIMOINE relies on a consortium of four partners: the Lab-STICC laboratory - ENIB (virtual reality), LINA-COD (data mining), CoDIRe (linguistics) and the company Topic-Topos (database administration and software integration). The work to be performed is divided into five tasks. Task 1: coordination. Task 2: semantic analysis of cultural heritage, i.e. proposing and using tools that, from texts and other data, create semantic models of heritage, namely the semantic features (concepts) and the associations (rules) between them. Task 3: data mining, i.e. studying and developing tools to discover groups of cultural heritage objects, and associations between these groups, that were not previously made explicit, and exploiting them to amend the semantic models (see the sketch below). Task 4: enactive interface, i.e. proposing a natural and immersive interaction method capable of self-organizing so as to provide a context for cultural heritage interpretation consistent with the user's inquiry. Task 5: integration, with several objectives: (1) ensure the exchange of data between the different modules and their synchronization, (2) ensure the integration of the software modules developed by the partners, and (3) ensure the system's portability across the different hardware platforms.
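As a rough illustration of the co-occurrence mining targeted by Task 3, the following minimal sketch counts which pairs of semantic features appear together in many heritage records. The record format, the feature names and the restriction to pairs are assumptions made for exposition, not the project's actual data model or tooling.

```typescript
// Minimal sketch: which pairs of semantic features co-occur frequently
// across cultural heritage records? (hypothetical data and format)

type HeritageRecord = Set<string>; // semantic features attached to one object

// Count the support of every feature pair and keep those above a threshold.
function frequentPairs(records: HeritageRecord[], minSupport: number): Map<string, number> {
  const counts = new Map<string, number>();
  for (const rec of records) {
    const feats = [...rec].sort();
    for (let i = 0; i < feats.length; i++) {
      for (let j = i + 1; j < feats.length; j++) {
        const pair = `${feats[i]} + ${feats[j]}`;
        counts.set(pair, (counts.get(pair) ?? 0) + 1);
      }
    }
  }
  return new Map([...counts].filter(([, support]) => support >= minSupport));
}

// Hypothetical records: a frequent pair such as "chapel + granite" would be a
// candidate association to feed back into the semantic models of Task 2.
const records: HeritageRecord[] = [
  new Set(["chapel", "granite", "17th century"]),
  new Set(["chapel", "granite", "pilgrimage"]),
  new Set(["manor", "granite", "17th century"]),
];
console.log(frequentPairs(records, 2));
```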
Imagine designing and deploying a distributed application on millions of machines simply by posting a link on Twitter or by buying a word on Google Adwords. Imagine doing this without relying on a cloud or a central authority, just through a decentralized execution environment composed of users' browsers that autonomously manages issues such as communication, naming, heterogeneity, and scalability. The introduction of browser-to-browser communication with WebRTC's DataChannel has brought these scenarios closer to reality, but today only experts can afford to tackle the technical challenges associated with large-scale browser-based deployments such as decentralized instant messaging (Firechat) and infrastructure-less Mission Critical Push To Talk. OBrowser aims to solve these challenges by means of a novel programming framework.
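For readers unfamiliar with the underlying primitive, the following browser-side sketch shows how a peer-to-peer DataChannel is set up with the standard WebRTC API. The signalling hook sendToPeer (which would carry the offer/answer and ICE candidates, e.g. through the server behind the posted link) is a hypothetical placeholder; a framework like OBrowser is meant to hide this entire layer from the application programmer.

```typescript
// Browser-side sketch of peer-to-peer messaging over a WebRTC DataChannel.
// sendToPeer is a hypothetical signalling hook, assumed to exist elsewhere.

declare function sendToPeer(msg: unknown): void;

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const channel = pc.createDataChannel("obrowser-tasks");

channel.onopen = () => channel.send(JSON.stringify({ hello: "peer" }));
channel.onmessage = (event) => console.log("message from peer:", event.data);

// Locally discovered ICE candidates must reach the remote peer via signalling.
pc.onicecandidate = (event) => {
  if (event.candidate) sendToPeer(event.candidate);
};

// The initiating browser creates the offer; the remote peer answers symmetrically.
async function startAsInitiator(): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer(pc.localDescription);
}
```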
Along with the democratization of increasingly high-performance digital and communication technologies, higher education and adult training are constantly challenged to renew and adapt their teaching practices. While the frontier between guided learning and self-learning is becoming less clear-cut, which tends to redefine the roles of the teacher and the learner, the wide accessibility of technologies also enables a diversity of interaction modes between teachers and learners, as well as among learners. We believe that the widespread use of digital technologies, and of online courses in particular, starts with the development of SPOCs (Small Private Online Courses) at reduced cost, while still covering a large number of educational areas. To that end, the engineering process needs to better involve the teachers in charge of the lectures, and to allow them to personalize their content and teaching methods in order to develop blended learning, that is, the combination of digital content and classroom teaching. PASTEL is a research project that aims to explore the potential of real-time automatic transcription for the instrumentation of mixed educational situations, where interactions may be face-to-face or online, synchronous or asynchronous. Speech recognition technology is approaching a level of maturity that opens new opportunities for instrumenting pedagogical practices and their new uses. More specifically, we develop (1) a real-time transcription application, and (2) educational outreach applications based on the outputs of the transcription system. We will use these results to automatically generate the materials of a basic SPOC. A set of editing features will be implemented for these applications, allowing teachers to adapt and customize the generated content according to their needs. The developed applications will then be made available to public institutions for higher education and research, and will also be transferred to industry through Orange or start-ups associated with the research laboratories involved in the project. The major innovations of PASTEL concern the structuring of discourse from automatic transcriptions, linked to its educational objectives. They also include the challenge of processing the stream in real time, which is required when the discourse structure is used in a face-to-face situation. The project also brings innovative solutions in terms of instrumentation and diversification of pedagogical practices, as well as a new approach to designing and structuring online educational content based on the use of speech recognition technology.
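The following hypothetical sketch illustrates one post-processing step of the kind PASTEL envisages: grouping the timed segments produced by a real-time transcription system into provisional SPOC sections that a teacher can then edit. The segment format and the pause-based splitting heuristic are assumptions made for illustration only, not the project's actual pipeline.

```typescript
// Hypothetical sketch: group timed transcript segments into coarse sections.

interface TranscriptSegment {
  start: number; // seconds from the beginning of the lecture
  end: number;
  text: string;
}

interface CourseSection {
  title: string; // provisional title, to be edited by the teacher
  segments: TranscriptSegment[];
}

// Start a new section whenever the speaker pauses longer than pauseThreshold
// seconds, using the first few words of the section as a provisional title.
function segmentLecture(
  segments: TranscriptSegment[],
  pauseThreshold = 4,
): CourseSection[] {
  const sections: CourseSection[] = [];
  let current: TranscriptSegment[] = [];
  const close = () => {
    if (current.length) {
      const title = current[0].text.split(/\s+/).slice(0, 6).join(" ");
      sections.push({ title, segments: current });
      current = [];
    }
  };
  for (const seg of segments) {
    const previous = current[current.length - 1];
    if (previous && seg.start - previous.end > pauseThreshold) close();
    current.push(seg);
  }
  close();
  return sections;
}
```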
Model-checking and formal modelling are now techniques with a certain academic recognition, but their applicability in practice remains somewhat below expectations. This is due in part to two main problems: rather rigid modelling of the systems, impairing abstraction and scalability, and insufficient feedback from the verification process. In this project, we address these issues by lifting these techniques to the more flexible and rich setting of parametrised formal models. In that setting, some features of the system, like the number of processes, the size of some buffers, communication delays, deadlines, energy consumption, and so on, may not be numerical constants, but rather unknown parameters. The model-checking questions then become more interesting: is some property true for all values of the parameters? Does there exist some value such that it is? Or even, what are all the possible values such that it is? (A toy illustration is given after this summary.) Building on the skills of the consortium on topics like regular model-checking, timed systems and probabilistic systems, and on our previous contributions in model-checking of systems with a parametrised number of processes and parametrised timed systems, including the development of software tool prototypes, we develop in this project new models, techniques, and tools to extend the applicability of parameters in formal methods. To achieve this objective, we study parameters in the context of discrete and timed/hybrid systems, both of them possibly augmented with quantitative information relating to costs (e.g. energy consumption) and probabilities. This gives the following six tasks: 1. Discrete parameters 2. Timing parameters 3. Discrete and timing parameters 4. Parameters in cost-based models 5. Parameters in discrete models with probabilities 6. Parameters in timed models with probabilities. Parametrised models are of obvious interest, but the associated theoretical problems are hard. For instance, in the model of parametric timed automata, the basic model for timed systems with time parameters, the mere existence of a parameter value such that the system can reach some given state is generally undecidable, even with only one parameter. As a consequence, in all these tasks we follow a common methodology, acknowledging these difficulties, which consists in formalising the problem, studying decidable subclasses and designing efficient algorithms for the parametrised model-checking problems (including in particular parameter synthesis), building efficient semi-algorithms for the general class that behave well in realistic cases, and finally implementing the techniques in tool prototypes. This raises many challenging and original problems, such as extending regular model-checking to graphs to model parametrised systems with an arbitrary topology, using infinite-state automata to represent sets of configurations, finding useful decidable classes of parametrised timed/hybrid systems or properties, providing techniques for approximate synthesis of parameter values, studying models with parametrised costs, studying probabilistic parametric models, and extending statistical verification techniques to parametric systems. We aim at producing high-quality scientific results, published in the recognised venues of the formal methods and model-checking communities, but also at producing software tool prototypes to make these results available in practice, both for the research community and for higher education.
Finally, we want to promote the field of parametrised model-checking through the organisation of a yearly open workshop, as a scope-extended version of the SynCoP workshop organised in 2014. Beyond the classical application fields of formal methods (e.g. embedded systems), we envision new application domains like smart homes, where parameters may account for the specifics of the residents. In that setting, cost-based parametrised models are particularly interesting for focusing on optimising energy consumption.
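To make the parametrised questions above concrete, the following toy sketch asks, for a deliberately racy test-then-set lock, for which numbers n of processes a state with two processes in the critical section is reachable. Both the protocol and the bounded enumeration over n are illustrative assumptions: enumerating n only yields a semi-algorithm for the existential question, which is precisely why the project develops symbolic techniques that answer such questions for all parameter values at once.

```typescript
// Toy parametrised reachability question: for which numbers n of processes can
// two of them be in the critical section simultaneously? (illustration only)

type PC = "idle" | "waiting" | "entering" | "critical";
interface State { pcs: PC[]; lock: boolean; }

const key = (s: State) => s.pcs.join(",") + "|" + s.lock;
const step = (s: State, i: number, pc: PC, lock = s.lock): State => ({
  lock,
  pcs: s.pcs.map((p, j) => (j === i ? pc : p)),
});

// Interleaving semantics: each process may take one local step at a time.
function successors(s: State): State[] {
  const next: State[] = [];
  s.pcs.forEach((pc, i) => {
    if (pc === "idle") next.push(step(s, i, "waiting"));
    else if (pc === "waiting" && !s.lock) next.push(step(s, i, "entering")); // observes a free lock...
    else if (pc === "entering") next.push(step(s, i, "critical", true));     // ...and takes it one step later (racy)
    else if (pc === "critical") next.push(step(s, i, "idle", false));
  });
  return next;
}

// Explicit-state breadth-first search for a mutual-exclusion violation.
function violatesMutex(n: number): boolean {
  const init: State = { pcs: Array(n).fill("idle"), lock: false };
  const seen = new Set<string>([key(init)]);
  const queue: State[] = [init];
  while (queue.length) {
    const s = queue.shift()!;
    if (s.pcs.filter((p) => p === "critical").length >= 2) return true;
    for (const t of successors(s)) {
      const k = key(t);
      if (!seen.has(k)) { seen.add(k); queue.push(t); }
    }
  }
  return false;
}

// Enumerating n answers the question only for the tested values (a violation
// appears from n = 2 onwards); covering all n requires symbolic techniques.
for (let n = 1; n <= 4; n++) console.log(`n = ${n}: violation ${violatesMutex(n)}`);
```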