Prof. Rais received his MS and PhD degrees in Computer Engineering, with specialization in Networks and Distributed Systems, from the University of Nice Sophia Antipolis, France, in 2007 and 2011, respectively. Before that, he received his BE in Computer Systems from the National University of Science and Technology, Pakistan, in 2002. He has more than 20 years of experience in teaching, research, and industry R&D. Prof. Rais is the author of several publications in internationally recognized peer-reviewed journals and conferences. His research interests include Network Protocols and Architectures, Information-Centric and Software-Defined Networks, Machine Learning Algorithms, Cloud Computing, Network Virtualization, and Internet Naming & Addressing issues.

The research profiles of Prof. Rais can also be accessed at the following links:
Rao Naveed Bin Rais on Google Scholar
Rao Naveed Bin Rais on ResearchGate
Rao Naveed Bin Rais on Scopus
Rao Naveed Bin Rais on ORCID

[NOTE]: In the publication list below, the markers indicate the following:
[J*] ==> Journal Paper
[C*] ==> Conference Paper
[B*] ==> Book
[BC*] ==> Book Chapter
[PP*] ==> Preprint
[R*] ==> Research Report
Recommender systems (RSs) play a pivotal role in mitigating information overload by aiding individuals or groups in discovering relevant and personalized information. Most existing group RSs assume each user to be associated with a single group. However, in real-world scenarios, a user can be part of multiple groups due to overlapping or diverse preferences; for example, an individual's food preferences may vary when dining with friends compared to dining with family. This raises several challenges for traditional RSs due to the inherent complexity of group memberships, degrading the effectiveness and accuracy of the recommendations. Computing user-to-group membership degrees is a complex task, and conventional methods often fall short in accurately capturing the varied preferences of individuals. To address these challenges, we propose an integrated two-stage group recommendation (ITGR) framework that considers users' simultaneous memberships in multiple groups with conflicting preferences. We employ fuzzy C-means clustering along with collaborative filtering to provide a more flexible and precise approach to membership assignment. Group formation is then carried out using similarity thresholds, followed by deep neural collaborative filtering (DNCF) to generate the top-k items for each group. Experiments are conducted on a large-scale recipe dataset, and the results demonstrate that the proposed model outperforms traditional approaches in terms of group satisfaction, normalized discounted cumulative gain (NDCG), precision, recall, and F1-measure.
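A minimal sketch of how soft user-to-group membership degrees could be computed with the standard fuzzy C-means membership formula, assuming users are represented by rating vectors; the variable names, fuzzifier value, and toy data are illustrative assumptions, not the ITGR implementation.

```python
import numpy as np

def fuzzy_cmeans_memberships(X, centroids, m=2.0, eps=1e-9):
    """Soft membership degrees of each user (row of X) to each cluster centroid."""
    # Pairwise distances between users and centroids: shape (n_users, n_clusters)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + eps
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))  -- standard fuzzy C-means update
    power = 2.0 / (m - 1.0)
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
    return u  # each row sums to 1: degrees of membership in each group

# Toy example: 4 users described by ratings over 3 items, 2 group centroids
X = np.array([[5, 1, 0], [4, 2, 1], [0, 5, 4], [1, 4, 5]], dtype=float)
centroids = np.array([[4.5, 1.5, 0.5], [0.5, 4.5, 4.5]])
print(fuzzy_cmeans_memberships(X, centroids).round(2))
```

Because the memberships are fractional rather than hard assignments, a user can contribute to several groups at once, which is what the group-formation stage then thresholds on.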
The integration of private mobile networks (PMNs) driven by next-generation train networks (NGTNs) requires a paradigm for secure spectrum sharing with mobile network operators (MNOs) under a privacy-driven architecture. To meet such unprecedented performance expectations, fine-grained spectrum sharing is required for PMNs that secures communication in terms of volume, time, and usage-area parameters. In this article, we present a novel vision for empowering secure spectrum sharing between NGTNs and MNOs using blockchain technology and smart contracts. In particular, the key features of blockchain and its integration are discussed, along with the anticipated challenges and potential benefits of the proposed architecture. Finally, we highlight potential NGTN use cases that address the outlined challenges. This would unlock several practical scenarios and critical applications to meet rising business demands. From an implementation perspective, this article envisions a conceptual blockchain-driven intelligent network ready to serve a number of NGTN applications that require privacy preservation.
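To make the notion of fine-grained sharing concrete, below is a minimal Python sketch of a spectrum-lease record constrained by volume, time, and usage area, the kind of terms a smart contract between an NGTN and an MNO could enforce; the field names and the validation rule are illustrative assumptions, not the proposed architecture or an actual on-chain contract.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SpectrumLease:
    """Illustrative spectrum-sharing agreement between an MNO and an NGTN."""
    band_mhz: tuple          # e.g. (3500, 3600)
    volume_limit_gb: float   # maximum data volume allowed under the lease
    valid_from: datetime
    valid_until: datetime
    usage_area: str          # e.g. a rail-corridor identifier

    def authorises(self, area: str, volume_used_gb: float, when: datetime) -> bool:
        """Check whether a transmission request falls within the leased terms."""
        return (area == self.usage_area
                and volume_used_gb <= self.volume_limit_gb
                and self.valid_from <= when <= self.valid_until)

lease = SpectrumLease((3500, 3600), 500.0,
                      datetime(2024, 1, 1), datetime(2024, 6, 30),
                      usage_area="rail-corridor-7")
print(lease.authorises("rail-corridor-7", 120.0, datetime(2024, 3, 15)))  # True
```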
The Internet of Things (IoT) is empowering various sectors and aspects of daily life. Green IoT systems typically involve Low-Power and Lossy Networks (LLNs) with resource-constrained nodes. Lightweight routing protocols, such as the Routing Protocol for Low-Power and Lossy Networks (RPL), are increasingly being applied for efficient communication in LLNs. However, RPL is susceptible to various attacks, such as the black hole attack, which compromises network security. Existing black hole attack detection methods in Green IoT rely on static thresholds and unreliable metrics to compute trust scores, resulting in increased false-positive rates, especially in resource-constrained IoT environments. To overcome these limitations, we propose a delta-threshold-based trust model called the Optimized Reporting Module (ORM) to mitigate black hole attacks in Green IoT systems. The proposed scheme comprises both direct trust and indirect trust and utilizes a forgetting curve. Direct trust is derived from performance metrics, including honesty, dishonesty, energy, and unselfishness. Indirect trust is computed using a similarity measure. The forgetting curve provides a mechanism to emphasize the most significant and recent feedback from direct and indirect trust. To assess the efficacy of the proposed scheme, we compare it with a well-known trust-based attack detection scheme. Simulation results demonstrate that the proposed scheme achieves a higher detection rate and fewer false-positive alarms than the existing scheme, confirming its applicability to Green IoT systems.
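A minimal sketch of how a forgetting curve could weight direct and indirect trust observations so that recent feedback dominates; the exponential decay constant, the direct/indirect weighting, and the toy scores are illustrative assumptions, not the ORM parameters.

```python
import math

def forgetting_weight(age, strength=5.0):
    """Ebbinghaus-style forgetting curve: w = exp(-age / strength).
    Newer observations (small age) get weights close to 1."""
    return math.exp(-age / strength)

def aggregate_trust(direct_obs, indirect_obs, alpha=0.7):
    """Combine time-stamped direct and indirect trust observations.
    Each observation is (age_in_rounds, score_in_[0,1])."""
    def weighted(obs):
        if not obs:
            return 0.0
        weights = [forgetting_weight(age) for age, _ in obs]
        return sum(w * s for w, (_, s) in zip(weights, obs)) / sum(weights)
    # alpha weights first-hand evidence above neighbours' recommendations
    return alpha * weighted(direct_obs) + (1 - alpha) * weighted(indirect_obs)

# Toy example: a node whose recent direct behaviour degraded (possible black hole)
direct = [(10, 0.9), (5, 0.8), (1, 0.2)]     # (age, score)
indirect = [(3, 0.6), (1, 0.4)]
print(round(aggregate_trust(direct, indirect), 3))
```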
The rapid evolution of drone communication systems necessitates the development of novel approaches for optimal beam management in millimetre-wave (mmWave) 6G networks. Beamforming is used to improve signal quality and enhance the signal-to-noise ratio (SNR); however, existing beam management performs an exhaustive search over a pre-defined codebook, resulting in higher latency due to training overhead, which makes it impractical for high-mobility applications. Therefore, this paper introduces an innovative technique for mmWave beam prediction, considering practical visual and communication scenarios. The proposed approach utilizes computer vision (CV) and ensemble learning via stacking, combining multi-modal vision sensing and positional data to achieve accurate estimation of drone positions and orientations. The developed framework first fine-tunes "You Only Look Once" version 5 (YOLOv5), a CV model, to obtain the bounding box (location) of the drone from RGB images. This filtered vision-sensing information and the position data are used to train two different sets of neural networks, and the outputs of each model are stacked to train a meta-learner used for the prediction of the top-K beams from a pre-defined codebook. The proposed method outperforms the baselines with a top-1 accuracy of approximately 90%, compared to 86% and 60% for the vision-only and position-only models, respectively. Furthermore, top-3 and top-5 accuracies are approximately 100%, resulting in a significant received signal strength gain.
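A minimal sketch of the stacking idea, assuming one base network trained on vision-derived bounding-box features and another on position data, with a logistic-regression meta-learner that ranks the codebook beams; the feature shapes, network sizes, synthetic labels, and the 32-beam codebook are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_beams = 600, 32                      # illustrative: 32-beam codebook
X_vision = rng.normal(size=(n, 4))        # e.g. bounding-box (x, y, w, h) from the detector
X_pos = rng.normal(size=(n, 3))           # e.g. GPS/altitude of the drone
y = rng.integers(0, n_beams, size=n)      # optimal beam index labels

# Stage 1: train one base model per modality
vision_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_vision, y)
pos_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_pos, y)

# Stage 2: stack the per-beam probabilities and train a meta-learner
meta_features = np.hstack([vision_net.predict_proba(X_vision),
                           pos_net.predict_proba(X_pos)])
meta = LogisticRegression(max_iter=1000).fit(meta_features, y)

# Top-K beam prediction for a new sample
sample = np.hstack([vision_net.predict_proba(X_vision[:1]),
                    pos_net.predict_proba(X_pos[:1])])
top_k = meta.classes_[np.argsort(meta.predict_proba(sample)[0])[::-1][:3]]
print("Top-3 candidate beams:", top_k)
```

In practice the meta-features would be produced from held-out predictions (e.g. cross-validated folds) rather than in-sample probabilities, to avoid leakage into the meta-learner.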
5G spectral efficiency requirements foresee network densification as a potential solution to improve capacity and throughput for next-generation wireless networks (NGWNs). This is achieved by shrinking the footprint of base stations (BSs), effective frequency reuse, and dynamic use of shared resources between users. However, such a deployment results in unnecessary handovers (HOs) due to smaller cell sizes and the limited sojourn time under high train mobility. In particular, when a train passes rapidly through the radio coverage footprints of BSs, a high HO rate may cause serious communication interruptions, impacting quality of service (QoS). This paper proposes a novel context-aware HO skipping scheme that relies on passenger mobility, train trajectory, travelling time and frequency, network load, and signal-to-interference-and-noise ratio (SINR) data. We model passenger traffic flows in the cardinal directions, i.e., north, east, west, and south (NEWS), in a novel framework that employs a realistic Poisson point process (PPP) to capture real-time mobility patterns in mobile networks. Spatio-temporal simulations leverage the NEWS mobility prediction model with machine learning (ML), where a support vector machine (SVM) achieves an accuracy of 94.51%. The ML-driven mobility prediction results are integrated into the proposed scheme, which shows coverage probability and average throughput comparable to the no-skipping case while significantly reducing HO costs.
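A minimal sketch of drawing base-station and passenger positions from a homogeneous Poisson point process over a square window, the basic spatial ingredient of such spatio-temporal simulations; the intensities, window size, and nearest-BS association are illustrative assumptions.

```python
import numpy as np

def sample_ppp(intensity, width, height, rng):
    """Homogeneous PPP: point count ~ Poisson(intensity * area),
    positions uniform over the window given the count."""
    n = rng.poisson(intensity * width * height)
    return rng.uniform([0, 0], [width, height], size=(n, 2))

rng = np.random.default_rng(42)
bs_xy = sample_ppp(intensity=1e-5, width=5000, height=5000, rng=rng)   # base stations
ue_xy = sample_ppp(intensity=1e-4, width=5000, height=5000, rng=rng)   # passengers

# Associate each passenger with its nearest BS (input to load / SINR estimates)
d = np.linalg.norm(ue_xy[:, None, :] - bs_xy[None, :, :], axis=2)
serving_bs = d.argmin(axis=1)
print(f"{len(bs_xy)} BSs, {len(ue_xy)} passengers, load of BS 0: {(serving_bs == 0).sum()}")
```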
Due to smaller cell sizes and the limited sojourn time under high-speed train mobility, unnecessary handovers (HOs) occur, which can lead to higher network communication costs and affect passengers' quality of service (QoS). This paper proposes a novel blockchain-enabled, privacy-preserving HO skipping framework using a train mobility dataset from the city of London. From the dataset's parameters, passenger traffic flows are modelled by averaging footfall numbers across various train lines and stations, while blockchain is utilised to maintain privacy. The framework stores pseudonym addresses in order to track the paths of users. The proposed framework achieves a favourable trade-off: an approximately 2% gain in average throughput, over 100% gain in last-hop signal quality, and a 50% reduction in HO costs, while also accounting for the resources needed to operate the blockchain.
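A minimal sketch of how pseudonym addresses could be derived so that a passenger's path can be tracked across base stations without exposing the real identity; the hash construction, epoch scheme, and identifier length are illustrative assumptions, not the paper's on-chain design.

```python
import hashlib
import secrets

def pseudonym(user_id: str, epoch: int, secret_salt: bytes) -> str:
    """Derive a per-epoch pseudonym address from a user identity.
    Only the holder of secret_salt can link pseudonyms back to the user."""
    digest = hashlib.sha256(secret_salt + user_id.encode() + epoch.to_bytes(8, "big"))
    return digest.hexdigest()[:40]   # address-like 160-bit identifier

salt = secrets.token_bytes(32)
# The same passenger yields a stable pseudonym within an epoch, so successive
# handovers can be linked into a path, but not to the real identity "alice".
path = [pseudonym("alice", epoch=12, secret_salt=salt) for _ in range(3)]
print(len(set(path)) == 1, path[0])
```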
This paper introduces a handover (HO) skipping topology analysis that adjusts HO skipping for 5G-and-beyond applications to improve overall network performance and diminish negative effects. We propose a novel Poisson point process (PPP) based, context-aware HO skipping approach that focuses on the impact of HO metrics such as the passenger's trajectory, different velocities, and the mean time a passenger spends within a base station (BS), in order to maintain good quality of service (QoS). The proposed context-aware scheme enables dynamic HO skipping, where the skipping decision is taken based on the load of the BSs along the passenger's trajectory. The parameters have been analysed, implemented in a dynamic simulator, and investigated for different parameter sets in a high-speed railway simulation scenario. Our simulation results demonstrate the robustness of the framework, showing comparable coverage probability across various high train velocities and mean times.
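A minimal sketch of the kind of skipping rule described above: the handover to the next BS on the trajectory is skipped when the expected sojourn time is too short or the target cell is overloaded; the thresholds and the sojourn-time approximation are illustrative assumptions, not the paper's decision logic.

```python
def should_skip_handover(cell_radius_m, train_speed_mps,
                         target_load, min_sojourn_s=4.0, max_load=0.8):
    """Decide whether to skip the upcoming handover along the trajectory.

    2 * cell_radius_m / train_speed_mps approximates the mean time the passenger
    would spend inside the target BS; skipping is triggered when this sojourn
    time is too short to be useful or when the target BS is already loaded.
    """
    expected_sojourn = (2 * cell_radius_m) / train_speed_mps
    return expected_sojourn < min_sojourn_s or target_load > max_load

# A train at ~300 km/h (83 m/s) crossing a 150 m small cell: sojourn ~3.6 s -> skip
print(should_skip_handover(cell_radius_m=150, train_speed_mps=83, target_load=0.3))  # True
# The same train crossing a 600 m macro cell with moderate load -> perform the HO
print(should_skip_handover(cell_radius_m=600, train_speed_mps=83, target_load=0.3))  # False
```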
With the advent of Coronavirus Disease 2019 (COVID-19), the world encountered an unprecedented health crisis due to the severe acute respiratory syndrome (SARS) pathogen. This impacted all sectors, but most critically the transportation sector, which required a strategy informed by mobility trends across transportation modes and regions. We analyse a mobility prediction model for smart transportation by considering key indicators, including data selection, processing, integration of transportation modes, and data-point normalisation for regional mobility. A machine learning (ML) driven classification is performed to predict the efficiency and variation of transportation modes, namely driving, walking, and transit. Additionally, regional mobility across Asia, Europe, Africa, Australasia, the Middle East, and America is analysed. In this regard, six ML algorithms are applied for the precise assessment of transportation modes and regions. The initial experimental results demonstrate that the majority of the world's travel dynamics have been distinctly reshaped, with accuracies of 91.21% and 84.5% achieved using Support Vector Machine (SVM) and Random Forest (RF) for the different transportation modes and regions. This study paves a new direction for assessing the transportation modes affected by the pandemic to optimize economic benefits for smart transportation.
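A minimal sketch of the ML-driven comparison, assuming a feature matrix of daily mobility indicators labelled by transportation mode; the synthetic data and the two classifiers shown (SVM and Random Forest) stand in for the six algorithms evaluated in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Illustrative features: daily % change in requests, rolling mean, weekday index
X = rng.normal(size=(900, 3))
y = rng.integers(0, 3, size=900)          # 0 = driving, 1 = walking, 2 = transit

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random Forest", RandomForestClassifier(n_estimators=200))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```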
Wireless Mesh Networks (WMNs) are considered self-organizing, self-healing, and self-configuring networks. Despite these attractive features, WMNs face several routing challenges, including scalability, reliability and link failures, mobility, flexibility, and other network management issues. To address these challenges, WMNs need to be made programmable so that standard techniques can be modified, configured, and implemented through software, which can be achieved by integrating the Software-Defined Networking (SDN) architecture. SDN, a cutting-edge technology, promises to ease both network management and routing in wireless mesh networks. However, migrating the legacy IP-based network model in its entirety leads to technical, operational, and economic problems, which can be mitigated by full interoperability between SDN and existing IP devices. This study introduces a Robust Routing Architecture for Hybrid Software-Defined and Wireless Mesh Networks (Soft-Mesh), enabling a systematic, gradual, and efficient transition of WMNs to SDN. The main objective of this paper is to suggest improvements to the SDN node architecture that allow the implementation of various network functions, such as routing, load balancing, network control, and traffic engineering, for hybrid SDN and IP networks. The Mininet-WiFi simulator is used to evaluate the performance of the proposed architecture on hybrid network topologies of 50, 100, 150, 200, and 250 nodes, with varying proportions of SDN hybrid and legacy nodes. Results are reported for average UDP throughput, end-to-end delay, packet drop ratio, and routing overhead, and are compared with traditional routing protocols, including Optimized Link State Routing (OLSR) and Better Approach to Mobile Ad hoc Networking (BATMAN), as well as with existing hybrid SDN/IP routing architectures, including Hakiri and wmSDN. The analysis of the simulation results shows that Soft-Mesh outperforms both the traditional and the existing hybrid routing protocols on the aforementioned performance metrics, yielding 50% to 70% improvement as the proportion of SDN hybrid nodes increases.
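A minimal sketch of how a small hybrid topology could be assembled in Mininet-WiFi, with SDN-controlled access points alongside stations representing the legacy portion of the mesh; the node counts, positions, SSID, and controller address are illustrative assumptions and not the Soft-Mesh evaluation topology.

```python
#!/usr/bin/env python
from mininet.node import RemoteController
from mn_wifi.net import Mininet_wifi
from mn_wifi.cli import CLI

def hybrid_topology():
    net = Mininet_wifi(controller=RemoteController)
    # SDN-controlled mesh access points (the "SDN hybrid" portion of the network)
    ap1 = net.addAccessPoint('ap1', ssid='soft-mesh', mode='g', channel='1',
                             position='10,30,0')
    ap2 = net.addAccessPoint('ap2', ssid='soft-mesh', mode='g', channel='6',
                             position='60,30,0')
    # Legacy (non-SDN) stations attached to the mesh
    sta1 = net.addStation('sta1', position='12,32,0')
    sta2 = net.addStation('sta2', position='58,28,0')
    c0 = net.addController('c0', controller=RemoteController,
                           ip='127.0.0.1', port=6653)
    net.configureWifiNodes()
    net.addLink(ap1, ap2)                 # backhaul link between mesh APs
    net.build()
    c0.start()
    ap1.start([c0])
    ap2.start([c0])
    CLI(net)                              # run e.g. ping / iperf between sta1 and sta2
    net.stop()

if __name__ == '__main__':
    hybrid_topology()
```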
We propose a reinforcement learning-based cell switching algorithm to minimize energy consumption in ultra-dense deployments without compromising the quality of service (QoS) experienced by the users. The proposed method intelligently learns which small cells (SCs) to turn off at any given time based on the traffic load of the SCs and the macro cell. To validate the idea, we use the open call detail record (CDR) data set from the city of Milan, Italy, and test our algorithm against typical operational benchmark solutions. With the obtained results, we demonstrate exactly when and how the proposed method provides energy savings, and how this happens without reducing the QoS of users. Most importantly, we show that our solution performs very similarly to exhaustive search, with the advantage of being scalable and less complex.
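A minimal tabular Q-learning sketch of the cell on/off decision, where the state is a discretised traffic-load level and the action switches one SC off or keeps it on; the reward shape, discretisation, and toy traffic dynamics are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_load_levels, n_actions = 5, 2           # actions: 0 = keep SC on, 1 = switch SC off
Q = np.zeros((n_load_levels, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(load_level, action):
    """Energy saved when switching off a lightly loaded SC; penalty (lost QoS)
    when switching off a heavily loaded one."""
    if action == 1:
        return 1.0 if load_level <= 1 else -2.0 * load_level
    return 0.0

for episode in range(5000):
    s = rng.integers(n_load_levels)                       # observed SC load level
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    r = reward(s, a)
    s_next = rng.integers(n_load_levels)                  # next-period load (toy dynamics)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# After training, the greedy policy switches SCs off only at the lowest load levels
print("Policy per load level (1 = switch off):", Q.argmax(axis=1))
```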
As a promising next-generation network architecture, named data networking (NDN) supports name-based routing and in-network caching to retrieve content in an efficient, fast, and reliable manner. Most studies on NDN have proposed innovative and efficient caching mechanisms and content retrieval via efficient routing. However, very few studies have targeted the vulnerabilities in the NDN architecture that a malicious node can exploit to perform a content poisoning attack (CPA). Such an attack can pollute the in-network caches, disrupt the routing of content, and consequently isolate legitimate content in the network. In the past, several mitigation strategies for the content poisoning attack have been proposed, but to the best of our knowledge, no specific work has addressed an emerging attack surface in NDN, which we call the interest flooding attack. Handling this attack surface can make content poisoning attack mitigation schemes more effective, secure, and robust. Hence, in this article, we propose the addition of a security mechanism to an existing CPA mitigation scheme, namely Name-Key Based Forwarding and Multipath Forwarding Based Inband Probe, in which we block the malicious face of compromised consumers by monitoring the cache-miss ratio and the queue capacity at the edge routers. The malicious face is blocked when the cache-miss ratio hits a threshold value, which is adjusted dynamically by monitoring the cache-miss ratio and queue capacity values. The experimental results show that we successfully mitigate the vulnerability of the CPA mitigation scheme by detecting and blocking the flooding interface, at the cost of very little verification overhead at the NDN routers.
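A minimal sketch of the edge-router check described above: the cache-miss ratio per incoming face is compared against a threshold adapted from the current queue occupancy, and the offending face is flagged for blocking when the threshold is crossed; the adaptation rule and constants are illustrative assumptions, not the scheme's actual parameters.

```python
def dynamic_threshold(queue_occupancy, base=0.6, sensitivity=0.3):
    """Lower the permissible cache-miss ratio as the router queue fills up,
    so flooding is cut off earlier under pressure (illustrative rule)."""
    return max(0.2, base - sensitivity * queue_occupancy)

def faces_to_block(face_stats, queue_occupancy):
    """face_stats: {face_id: (interests_received, cache_misses)}."""
    threshold = dynamic_threshold(queue_occupancy)
    blocked = []
    for face, (interests, misses) in face_stats.items():
        miss_ratio = misses / interests if interests else 0.0
        if miss_ratio >= threshold:
            blocked.append(face)          # e.g. disable the face in the forwarding pipeline
    return blocked

stats = {"face-17": (1000, 930),          # flooding unknown names -> mostly misses
         "face-3": (1200, 240)}           # legitimate consumer
print(faces_to_block(stats, queue_occupancy=0.7))   # ['face-17']
```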
Due to the rapid growth in the usage of online services, reported incidents of ransomware proliferation are on the rise. Ransomware is a more hazardous threat than other malware, as the victim cannot regain access to the hijacked device until some form of compensation is paid. In the literature, several dynamic analysis techniques have been employed for the detection of malware, including ransomware; however, to the best of our knowledge, the hardware execution profile has not yet been investigated for ransomware analysis. In this study, we show that the true execution picture obtained via a hardware execution profile is beneficial for identifying obfuscated ransomware as well. We evaluate features obtained from hardware performance counters to classify malicious applications into ransomware and non-ransomware categories using several machine learning algorithms, such as Random Forest, Decision Tree, Gradient Boosting, and Extreme Gradient Boosting. The employed data set comprises 80 ransomware and 80 non-ransomware applications collected from the VirusShare platform. The results reveal that the extracted hardware features play a substantial part in the identification and detection of ransomware, with an F-measure of 0.97 achieved by Random Forest and Extreme Gradient Boosting.
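A minimal sketch of classifying applications from hardware-performance-counter features with a Random Forest and reporting the F-measure; the counter features and synthetic values are illustrative assumptions, not the paper's data set or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(7)
# Illustrative HPC features per run: instructions retired, cache misses, branch misses, ...
n_samples, n_counters = 160, 6
X = rng.normal(size=(n_samples, n_counters))
y = rng.integers(0, 2, size=n_samples)    # 1 = ransomware, 0 = non-ransomware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("F-measure:", round(f1_score(y_te, clf.predict(X_te)), 3))
```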