4 Research products, page 1 of 1

  • Other research product . Other ORP type . 2015
    Open Access English
    Authors: Chen, Xingyuan; Xia, Yunqing; Jin, Peng; Carroll, John
    Publisher: Association for the Advancement of Artificial Intelligence Press
    Country: United Kingdom

    Manually labeling documents for training a text classifier is expensive and time-consuming. Moreover, a classifier trained on labeled documents may suffer from overfitting and adaptability problems. Dataless text classification (DLTC) has been proposed as a solution to these problems, since it does not require labeled documents. Previous research in DLTC has used explicit semantic analysis of Wikipedia content to measure semantic distance between documents, which is in turn used to classify test documents based on nearest neighbours. This semantic-based DLTC method has a major drawback: it relies on a large-scale, finely compiled semantic knowledge base, which is difficult to obtain in many scenarios. In this paper we propose a novel kind of model, descriptive LDA (DescLDA), which performs DLTC with only category description words and unlabeled documents. In DescLDA, the LDA model is coupled with a describing device that infers Dirichlet priors from descriptive documents created from the category description words. These Dirichlet priors are then used by LDA to induce category-aware latent topics from the unlabeled documents. Experimental results on the 20Newsgroups and RCV1 datasets show that: (1) our DLTC method is more effective than the semantic-based DLTC baseline; and (2) its accuracy is very close to that of state-of-the-art supervised text classification methods. As neither external knowledge resources nor labeled documents are required, our DLTC method is applicable to a wider range of scenarios.
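
    As a rough illustration of the dataless idea summarised above (and not the authors' DescLDA model), the sketch below seeds per-topic Dirichlet word priors from category description words via gensim's eta parameter, trains LDA on unlabeled documents, and labels each document with its dominant topic. The example categories, seed words, and boost value are assumptions.

        # Minimal sketch of prior-seeded LDA for dataless classification
        # (a simplification of the DescLDA idea, not the paper's model).
        import numpy as np
        from gensim import corpora, models

        # Category description words (hypothetical example categories).
        seeds = {"sport": ["game", "team", "player"],
                 "space": ["orbit", "nasa", "rocket"]}

        # Unlabeled documents, already tokenized.
        docs = [["the", "team", "won", "the", "game"],
                ["the", "rocket", "reached", "orbit"]]
        dictionary = corpora.Dictionary(docs)
        corpus = [dictionary.doc2bow(d) for d in docs]

        # Build an asymmetric topic-word prior: boost each category's seed words.
        categories = list(seeds)
        eta = np.full((len(categories), len(dictionary)), 0.01)
        for k, cat in enumerate(categories):
            for w in seeds[cat]:
                if w in dictionary.token2id:
                    eta[k, dictionary.token2id[w]] = 5.0   # assumed boost value

        lda = models.LdaModel(corpus, id2word=dictionary, num_topics=len(categories),
                              eta=eta, passes=50, random_state=0)

        # Label each document with the category of its dominant topic.
        for d, bow in zip(docs, corpus):
            topic, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
            print(" ".join(d), "->", categories[topic])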

  • Open Access English
    Authors: Pressas, Andreas; Sheng, Zhengguo; Ali, Falah; Tian, Daxin; Nekovee, Maziar
    Publisher: Institute of Electrical and Electronics Engineers
    Country: United Kingdom

    Vehicle-to-Vehicle (V2V) communication is an upcoming technology that can enable safer, more efficient transportation via wireless connectivity among moving cars. The key enabling technology, which specifies the physical and medium access control (MAC) layers of the V2V stack, is IEEE 802.11p, part of the IEEE 802.11 family of protocols originally designed for use in WLANs. V2V networks are formed on an ad hoc basis from vehicular stations that rely on the delivery of broadcast transmissions for their envisioned services and applications. Broadcast is inherently more sensitive to channel contention than unicast because the MAC protocol cannot adapt to increased network traffic and colliding packets are never detected or recovered. This paper addresses this inherent scalability problem of the IEEE 802.11p MAC protocol. The density of the network can range from very sparse to hundreds of stations contending for access to the channel, and a suitable MAC needs to offer sufficient capacity for V2V exchanges even in such dense topologies, which will be common in urban networks. We present a modified version of the IEEE 802.11p MAC based on Reinforcement Learning (RL), aiming to reduce the packet collision probability and bandwidth wastage. Implementation details regarding both the tuning of the learning algorithm and the networking side are provided. We also present simulation results on the achieved packet delivery and the possible delay overhead of this solution. Our solution shows up to a 70% increase in throughput compared to the standard IEEE 802.11p as network traffic increases, while keeping transmission latency within acceptable levels.
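
    The abstract does not spell out the learning formulation, so the sketch below shows one generic way an RL agent could adapt the 802.11p contention window from collision feedback: plain epsilon-greedy Q-learning with assumed state bins, actions, reward shaping, and synthetic channel feedback. It illustrates the approach, not the paper's algorithm.

        # Hedged sketch: Q-learning over the contention window (CW).
        # State = coarse collision-rate bin, action = shrink/keep/grow the CW.
        import random

        CW_VALUES = [15, 31, 63, 127, 255, 511, 1023]   # candidate CW sizes
        ACTIONS = (-1, 0, 1)                            # change of CW index
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
        Q = {}                                          # Q[(state, action)] -> value

        def bin_state(collision_rate):
            return min(int(collision_rate * 10), 9)     # 10 coarse collision-rate bins

        def choose_action(state):
            if random.random() < EPSILON:               # epsilon-greedy exploration
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

        def q_update(state, action, reward, next_state):
            best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

        # Toy interaction loop with a synthetic channel: larger CW -> fewer collisions.
        cw_i, rate = 0, 0.5
        for _ in range(200):
            s = bin_state(rate)
            a = choose_action(s)
            cw_i = min(max(cw_i + a, 0), len(CW_VALUES) - 1)
            rate = max(0.0, 0.9 - CW_VALUES[cw_i] / 1200)   # assumed feedback model
            reward = 1.0 - rate                             # reward successful delivery
            q_update(s, a, reward, bin_state(rate))
        print("learned CW:", CW_VALUES[cw_i])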

  • Other research product . Other ORP type . 2016
    Open Access English
    Authors: Lai, Yuhui; Wang, Chen; Li, Yanan; Ge, Shuzhi Sam; Huang, Deqing
    Country: United Kingdom

    In this paper, a pointing gesture recognition method is proposed for human-robot interaction. The pointing direction of the human partner is obtained by extracting the joint coordinates and computing the pointing vector from them. A 3D-to-2D mapping is then implemented to build a top-view 2D map corresponding to the actual ground layout. Using this method, the robot is able to interpret the human partner’s 3D pointing gesture based on the coordinate information of his/her shoulder and hand. In addition, speed control of the robot can be achieved by adjusting the position of the human partner’s hand relative to the head. The recognition performance and viability of the system are tested through quantitative experiments.
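
    As a small numerical illustration of the geometry described above (not the paper's implementation), the sketch below derives the pointing ray from the shoulder and hand coordinates and intersects it with the ground plane to obtain a point on a top-view 2D map. The coordinate convention (z up, floor at z = 0) and the hand-to-head speed rule are assumptions.

        # Sketch: pointing ray from shoulder through hand, intersected with the
        # ground plane z = 0 to give a top-view (x, y) target.
        import numpy as np

        def pointed_ground_location(shoulder, hand):
            shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
            direction = hand - shoulder               # 3D pointing vector
            if direction[2] >= 0:                     # ray never reaches the floor
                return None
            t = -hand[2] / direction[2]               # extend ray from hand to z = 0
            target = hand + t * direction
            return target[:2]                         # (x, y) on the top-view map

        def speed_scale(hand, head):
            # Assumed rule (not stated in the abstract): a hand raised closer to
            # head height maps to a faster robot speed, clipped to [0, 1].
            return float(np.clip(hand[2] / max(head[2], 1e-6), 0.0, 1.0))

        # Example: shoulder at 1.4 m, hand at 1.2 m, pointing forward and down.
        print(pointed_ground_location([0.0, 0.2, 1.4], [0.3, 0.5, 1.2]))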

  • Open Access English
    Authors: Sheng, Zhengguo; Özpolat, Mumin; Tian, Daxin; Leung, Victor; Nekovee, Maziar
    Publisher: Institute of Electrical and Electronics Engineers
    Country: United Kingdom
    Project: UKRI | Doing More with Less Wiri... (EP/P025862/1)

    The increasing complexity of automotive electronics has put considerable pressure on in-vehicle communication networks to accommodate growing information flows. The use of power lines is a promising alternative for in-vehicle communications because it eliminates extra data cables. In this paper, we focus on the latest HomePlug Green PHY (HPGP), which has been promoted by major automotive manufacturers for green communications with electric vehicles, and study its worst-case access delay performance in supporting delay-critical in-vehicle applications using both theoretical analysis and simulation. Specifically, we apply Network Calculus as a deterministic modeling approach to evaluate the worst-case delay and further verify its performance using OMNeT++ simulation. Evaluation results are also provided to compare with legacy methods and to offer useful guidelines for developing HPGP-based vehicular power line communication systems.
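
    For context on the Network Calculus step mentioned above: a standard textbook bound for a token-bucket arrival curve b + r*t served by a rate-latency curve R*(t - T) (taken as zero when negative) is a worst-case delay of T + b/R, provided r <= R. The sketch below evaluates that generic bound with placeholder numbers; it is not the paper's HPGP-specific model.

        # Generic Network Calculus worst-case delay bound (textbook result),
        # not the paper's HPGP model. The example parameters are placeholders.
        def worst_case_delay(burst_b, rate_r, service_rate_R, latency_T):
            """Delay bound for arrival curve b + r*t and service curve R*(t - T)+."""
            if rate_r > service_rate_R:
                raise ValueError("unstable system: arrival rate exceeds service rate")
            return latency_T + burst_b / service_rate_R

        # Example with assumed values: 4000-bit burst, 1 Mbit/s arrivals,
        # 4 Mbit/s effective service rate, 2 ms access latency.
        print(worst_case_delay(burst_b=4000, rate_r=1e6, service_rate_R=4e6,
                               latency_T=0.002), "seconds")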
