The steady success of the Internet has led to an enormous rise in the volume of electronic text records, and strategies for organizing these materials into meaningful bundles are increasingly important. Standard document clustering approaches focus on statistical characteristics and cluster with a syntactic rather than semantic notion. This paper provides a new way to group documents based on textual similarities. Text synopses from the Wikipedia and IMDB datasets are identified, tokenized, and stop-word-filtered using the NLTK dictionary. The next step is to build a vector space with TF-IDF and cluster it using the K-means algorithm. Results were obtained for three proposed scenarios: 1) no preprocessing, 2) preprocessing without derivation, and 3) preprocessing with derivation. The results showed that good similarity ratios were obtained in the internal evaluation when using the txt-sentoken dataset for all K values, with the best ratio obtained at K = 20. In addition, as an external evaluation, purity measures were obtained; the V-measure on txt-sentoken and the accuracy scale on nltk-Reuters gave the best results in all three scenarios at K = 20. As a subjective evaluation, the maximum time was consumed by the first scenario (no preprocessing), and the minimum time was recorded by the second scenario (preprocessing without derivation).
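The pipeline above (tokenize, drop stop words, build TF-IDF vectors, cluster with K-means) can be sketched with the standard library alone. This is illustrative only: the toy `STOP` set stands in for the NLTK stop-word dictionary, and plain Lloyd's algorithm stands in for a library K-means.

```python
# Dependency-free sketch of the described clustering pipeline.
import math
import random
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "of", "and", "to", "in", "it"}  # toy stop list

def tfidf(docs):
    """Tokenize, remove stop words, and build dense TF-IDF vectors."""
    toks = [[w for w in re.findall(r"[a-z]+", d.lower()) if w not in STOP]
            for d in docs]
    vocab = sorted({w for t in toks for w in t})
    df = Counter(w for t in toks for w in set(t))
    n = len(docs)
    vecs = []
    for t in toks:
        tf, length = Counter(t), max(len(t), 1)
        vecs.append([tf[w] / length * math.log(n / df[w] + 1) for w in vocab])
    return vecs

def kmeans(vecs, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns one cluster label per vector."""
    rng = random.Random(seed)
    centers = rng.sample(vecs, k)
    labels = [0] * len(vecs)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(v, centers[c])))
                  for v in vecs]
        for c in range(k):
            members = [v for v, l in zip(vecs, labels) if l == c]
            if members:  # keep the old center if a cluster empties out
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

A derivation (stemming) step, as in the third scenario, would simply replace each token with its stem before the TF-IDF computation.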
Real-Time Video Distribution (RTVD) is now the focus of many applications, including video conferencing, surveillance, and video broadcasting. This paper introduces a multi-IP-camera-based method for distributing video signals across several levels using a hybrid client/server and peer-to-peer architecture. The proposed structure has four primary functions: first, the central server receives all connected camera transmissions and shows the video signals to all attached clients and servers at level two. Second, the client/servers at level two view the received video signals and rebroadcast them to the next level. Third, the client/servers at level three receive the video signals from the upper level and rebroadcast them to the fourth level. Fourth, the many clients at level four view the video signals received from level three. The planned architecture and mechanism therefore give the admin the capability to concentrate on the frames passed from the IP cameras to/from the central server. Furthermore, this mechanism can register and store the stream of frames and encode them during transmission. The proposed system was built in the VC# programming language on top of various architectures and algorithms.
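The four-level fan-out above can be sketched as a relay tree: every node first views a frame, then rebroadcasts it to its children at the next level. Node names and fan-out below are illustrative; the actual system streams video over network sockets.

```python
def relay(frame, tree, node="central", deliveries=None):
    """tree: dict parent -> list of children. Returns (node, frame) pairs
    in the order the frame is viewed and rebroadcast down the levels."""
    if deliveries is None:
        deliveries = []
    deliveries.append((node, frame))      # this node views the frame...
    for child in tree.get(node, []):      # ...then rebroadcasts downstream
        relay(frame, tree, child, deliveries)
    return deliveries
```

The hybrid design shows up in the middle levels: a level-2 or level-3 node is a client of its parent and a server for its children at the same time.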
Strong parallelism can minimize computation time while increasing the cost of synchronization, so it is vital to keep track of how processes and threads are working. Thread-based systems are known to improve the productivity of complex operations: threading lightens the load on the main thread, thus enhancing system performance. This paper focuses on the development of a system with two main stages, monitoring and controlling a program, that can run on a number of multicore system architectures, including those with 2, 4, and 8 CPUs. The algorithms associated with this work provide dependent computer-system information, check the status of all existing processes with their relevant information, and run all possible process/thread cases that compose the user program (Single-Process/Single-Thread, Single-Process/Multi-Thread, Multi-Process/Single-Thread, Multi-Process/Multi-Thread, and Multi-Process/Single-Multi-Thread). The monitoring phase provides complete information on the User Program (UP) with all its processes and threads, such as name, ID, elapsed time, total CPU time, CPU usage, user time, kernel time, priority, RAM size, allocated core, read bytes, and read count. The controlling phase controls the processes and threads by suspending/resuming/killing them, modifying their priority, and forcing them onto a particular core.
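The process/thread cases above can be sketched with the standard library alone. The workload (`burn`), the case names, and the reported fields below are illustrative assumptions, and the OS-specific controls the paper's system uses (suspend/resume, priority, core affinity) are omitted since they need platform APIs.

```python
# Stdlib-only sketch: run a hypothetical CPU-bound workload under a
# given process/thread configuration and report basic monitoring info.
import os
import threading
import time
from multiprocessing import Pool

def burn(n=50_000):
    """Stand-in CPU-bound user workload."""
    return sum(i * i for i in range(n))

def _threaded_burn(threads):
    """Run `burn` on the requested number of threads in this process."""
    ts = [threading.Thread(target=burn) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return os.getpid()

def run_case(processes=1, threads=1):
    """Execute one Single/Multi-Process x Single/Multi-Thread case."""
    start, cpu0 = time.perf_counter(), time.process_time()
    if processes == 1:
        pids = [_threaded_burn(threads)]
    else:  # fan the same threaded workload out over several processes
        with Pool(processes) as pool:
            pids = pool.map(_threaded_burn, [threads] * processes)
    return {"name": f"{processes}P/{threads}T",
            "pids": pids,
            "elapsed": time.perf_counter() - start,
            "cpu_time": time.process_time() - cpu0}
```

Note that on the multi-process path `time.process_time` covers only the parent, so per-child CPU accounting of the kind the monitoring phase reports would need per-process instrumentation.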
Spike sorting is a technique used to detect signals generated by the neurons of the brain and to classify which spike belongs to which neuron. It is one of the most important techniques in electrophysiological data processing. Spike Sorting Algorithms (SSAs) are created to differentiate the activity of one or more neurons from background electrical noise using waveforms recorded from one or more electrodes in the brain. This sorting plays an essential role in extracting information from extracellular recordings in the neuroscience research community. A spike sorting algorithm has several steps: detection, feature extraction, and clustering. One of the most important concerns in spike sorting is the accuracy of the classification of neuron spikes. This article gives a brief overview of the spike sorting algorithm, and the contribution of this paper is a comprehensive overview of the previous works on the spike sorting steps (detection, feature extraction, and clustering). These works used new techniques to solve the problem of overlapping spikes. In addition, previous works used real-time or online spike sorting instead of offline spike sorting, and previous researchers used machine learning algorithms for automatic classification in spike sorting.
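The detection step named above is commonly an amplitude threshold. One widespread choice in the spike-sorting literature, assumed here rather than taken from the surveyed works, derives the threshold from the median absolute amplitude of the signal, rescaled by 0.6745 to estimate the noise standard deviation:

```python
# Sketch of threshold-based spike detection; feature extraction and
# clustering would then operate on waveforms cut around these indices.
import statistics

def detect_spikes(signal, factor=4.0):
    """Return indices where |amplitude| exceeds factor * estimated noise std."""
    noise_std = statistics.median(abs(s) for s in signal) / 0.6745
    threshold = factor * noise_std
    return [i for i, s in enumerate(signal) if abs(s) > threshold]
```

The median (rather than the mean) is used so that the spikes themselves do not inflate the noise estimate.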
Multiple-Input Multiple-Output (MIMO) and Orthogonal Frequency Division Multiplexing (OFDM) based communication strategies have gained popularity in industry and research due to the rising demand for wireless cellular communication. Massive MIMO-OFDM-based cellular communication has recently become popular in a variety of real-time applications. However, as the need for high-quality communication grew, so did the number of users, posing various problems. In this article, we present a comprehensive review of Massive MIMO-OFDM-based communication techniques and their development. We mainly focus on essential parameters of Massive MIMO-OFDM that play an essential role in 5G communication, such as PAPR, precoding, channel estimation, and error-correcting codes. The paper shows results on the energy efficiency of a wireless MIMO link operating at millimeter-wave (mmWave) frequencies in a typical 5G scenario, showing the impact of the above parameters on 5G performance. This comprehensive review and comparison will help 5G development and application researchers adopt the proper techniques to meet their demands.
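One of the parameters highlighted above, PAPR, can be illustrated concretely: the inverse DFT that forms an OFDM symbol can add subcarriers constructively, producing peaks far above the average power. A stdlib-only computation as a sketch (a real transmitter would use an FFT library and oversampling):

```python
# Illustrative PAPR computation for a single OFDM symbol.
import cmath
import math

def papr_db(subcarriers):
    """Peak-to-average power ratio (dB) of the OFDM symbol formed by an IDFT."""
    n = len(subcarriers)
    x = [sum(s * cmath.exp(2j * math.pi * k * t / n)
             for k, s in enumerate(subcarriers)) / n
         for t in range(n)]  # time-domain OFDM symbol
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / n))
```

All-equal subcarriers are the worst case (all energy piles into one sample, giving PAPR = 10 log10(N) dB), while a single active subcarrier yields a constant-envelope symbol with 0 dB PAPR; PAPR-reduction schemes try to keep real payloads away from the former extreme.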
One of the more significant recent advances in computer science is the coevolution of deep learning and the Semantic Web. This subject includes research from various perspectives, including using organized information inside the neural-network training method or enriching these networks with ontological reasoning mechanisms. By bridging deep learning and the Semantic Web, it is possible to enhance the efficiency of neural networks and open up exciting possibilities in science. This paper presents a comprehensive study of the closest previous research combining the role of deep learning with the performance of the Semantic Web, tying the two sciences together with their applications. The paper also explains the adoption of an intelligent system in Semantic Deep Learning (SemDeep). Among the significant results obtained from the previous works addressed in this paper, it can be noted that they focused on real-time detection of phishing websites with HTML Phish. Also, the DnCNN, followed by ResNet, Res-Unit, UNet, and Deeper SRCNN, achieved the best results, recording 88.5% SSIM, 32.01% PSNR, and 3.90% NRMSE.
Population growth and the creation of new equipment are accompanied by a constant increase in daily energy use and have created significant consumer issues in energy management. Smart meters (SMs) are simply instruments for measuring energy usage, and they are a significant resource of the evolving technological energy-management system. By providing precise billing data, usage information at the user end, and two-way communication, the SM is the critical component of an intelligent power grid. The Internet of Things (IoT) is a critical partner of the power business, leading to intelligent resource management that ensures successful data collection and use. This paper proposes the design and analysis of an intelligent energy-management system based on Multi-Agent (MA) and Distributed IoT (DIoT) techniques. An efficient approach is proposed to monitor and control power-consumption levels in the case study of Duhok Polytechnic University (DPU). DPU consists of the Presidency, six colleges, and eight institutes. These fifteen campuses are distributed over a wide geographical area with long distances between campuses (i.e., more than 100 km). Each campus is represented by a Node, and Wi-Fi provides the connection inside each node. These nodes are connected via the Internet to the Main Control Unit (MCU), a Raspberry Pi connected to the cloud. Depending on the data received from the Nodes, the MCU makes the correct decision for each node using intelligent algorithms and the user's requirements. Control commands are then initiated, and the node's appliances can be controlled automatically (or even manually) from the MCU.
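The MCU's per-node decision step described above can be sketched as a simple rule: compare each node's reported consumption against a user-set limit. The limit values, thresholds, and command names below are hypothetical, not DPU's actual policy or the paper's algorithms.

```python
def mcu_decide(readings, limits, warn_fraction=0.8):
    """readings/limits: dicts keyed by node (campus) name, in kW.
    Returns one hypothetical control command per node."""
    commands = {}
    for node, kw in readings.items():
        limit = limits.get(node, float("inf"))
        if kw > limit:
            commands[node] = "shed-load"      # cut low-priority appliances
        elif kw > warn_fraction * limit:
            commands[node] = "warn"           # approaching the limit
        else:
            commands[node] = "normal"
    return commands
```

In the deployed system, such a decision loop would run on the Raspberry Pi against live readings and push the resulting commands back to each node's appliances.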
Mobile core networks are facing exponential growth in traffic and computing demand as smart devices and mobile applications become more popular. Caching is one of the most promising approaches to these challenges: it reduces the backhaul load in wireless networks by keeping frequently used information at the destination node. Furthermore, proactive caching is an important technique for minimizing the delay of storing planned content needs, relieving backhaul traffic and alleviating the delay caused by handovers. This paper investigates the caching types and compares the improvement from caching techniques with other methods used to improve 5G performance. The problems and solutions of caching in 5G networks are also explored. Caching research showed that the improvement depends on the load, the cache size, and the number of requesting users who can obtain the required results through a proactive caching scheme. A significant decrease in traffic and total network latency can be achieved with caching.
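The backhaul saving described above can be made concrete with a toy destination-node cache: every hit is a request that never touches the backhaul, so the load reduction equals the hit rate. LRU eviction and the counters below are illustrative choices, not the schemes surveyed in the paper.

```python
# Toy destination-node cache illustrating backhaul offload.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def request(self, content_id):
        """Serve a request, reporting where the content came from."""
        if content_id in self.store:
            self.store.move_to_end(content_id)    # LRU bookkeeping
            self.hits += 1
            return "cache"
        self.misses += 1                          # fetched over the backhaul
        self.store[content_id] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)        # evict least recently used
        return "backhaul"
```

A proactive scheme would differ only in when `store` is populated: planned content is inserted before the request arrives, turning would-be misses into hits.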
The objective of this study is to evaluate cluster-based performance under ultra-dense HTTP traffic. This paper provides a performance analysis of existing load-balancing algorithms (Round Robin, Least Connection, and IP-Hash/Source) in cluster-based web servers. The performance testing process uses Apache JMeter 5.1 and a distributed technique to generate an ultra-dense load (100,000-500,000 HTTP requests) in a real network. Generally, the results indicate that the proposed Nginx-based cluster is more responsive, more stable, and consumes fewer resources in terms of response time, throughput, standard deviation, and CPU usage, while in terms of error rate the Apache-based cluster is more efficient. Moreover, with the Nginx-based cluster, the Round Robin algorithm provided slightly better performance, whereas the IP-Hash algorithm outperformed the other two algorithms for the Apache-based cluster across all the utilized metrics.
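The three policies compared above can be sketched as proxy-side server-selection functions. This is a minimal illustration, not the Nginx or Apache implementations, which add weights, health checks, and per-worker state:

```python
# Minimal server-selection sketches for the three compared policies.
import itertools
import zlib

class Balancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = itertools.cycle(self.servers)
        self.active = {s: 0 for s in self.servers}  # open-connection counts

    def round_robin(self):
        """Hand out backends in strict rotation."""
        return next(self._rr)

    def least_connection(self):
        """Pick the backend with the fewest open connections."""
        return min(self.servers, key=self.active.__getitem__)

    def ip_hash(self, client_ip):
        """Same client IP always maps to the same backend (session affinity)."""
        return self.servers[zlib.crc32(client_ip.encode()) % len(self.servers)]
```

The affinity property of `ip_hash` is what makes it attractive for stateful sessions, at the cost of a potentially uneven load when client IPs are skewed.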
Dew Computing (DC) is a comparatively modern field with a wide range of applications. Technological advances such as fog, edge, and dew computing, together with distributed intelligence, force us to reconsider how traditional Cloud Computing (CC) serves the Internet of Things. A new dew-computing theory is presented in this article, with a revised definition: DC is a software-hardware organization in which on-premises servers provide autonomy and collaborate with cloud networks. Dew Computing aims to enhance the capabilities of on-premises and cloud-based applications, and these categories can result in the development of new applications. Worldwide, there has been rapid growth in Information and Communication Technology (ICT), starting with Grid Computing (GC), then CC and Fog Computing (FC), and most recently Edge Computing (EC). DC technologies, infrastructure, and applications are described, and we go through the newest developments in fog networking, QoE, cloud at the edge, platforms, security, and privacy. The dew-cloud architecture is an alternative to the current client-server architecture, with the two servers located at opposite ends. In the absence of an Internet connection, a dew server helps users browse and track their details. Data are primarily stored as a local copy on the dew server and are synchronized with the cloud master copy once the Internet connection is restored. The local dew pages, a local online version of the current website, can be browsed, read, written, or added to by the users. Mapping between different local dew sites is made possible by the dew domain name scheme and dew domain redirection.
The continuing success of the Internet has led to an enormous rise in the volume of electronic text records, and strategies for grouping these records into coherent groups are increasingly important. Traditional text clustering methods focus on statistical characteristics, clustering with a syntactic rather than semantic concept. A new approach for clustering documents based on textual similarities is presented in this paper. The method defines, tokenizes, and stop-word-filters text synopses from the Wikipedia and IMDB datasets using the NLTK dictionary. Then, a vector space is created using TF-IDF, and the K-means algorithm carries out the clustering. The results are shown as an interactive website.
The biological human brain was the inspiration for Artificial Neural Networks (ANNs). The notion was converted into a mathematical formulation and then into machine learning, which is utilized to address various issues throughout the world. ANNs have achieved advances in solving numerous intractable problems in several fields in recent times. However, their success depends on the selected hyper-parameters, and manually fine-tuning them is a time-consuming task. Therefore, automating the design or topology of artificial neural networks has become a hot issue in both academic and industrial studies. Among the numerous optimization approaches, evolutionary algorithms (EAs) are commonly used to optimize the architecture and parameters of ANNs. In this paper, we review several successful, well-designed strategies for using EAs to develop artificial neural network architectures, published in the last four years. In addition, we conducted a thorough study and analysis of each publication, summarizing details such as the methods used, datasets, computing resources, training duration, and performance. The automated neural-network techniques performed admirably; however, long training periods and huge computing resources remain issues for these sorts of ANN techniques.
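The EA-based design loop surveyed above can be sketched generically: evolve a population of hyper-parameter configurations under selection, crossover, and mutation. The fitness function below is a stand-in (the reviewed studies train a network and score validation accuracy, which is what makes the search so expensive), and all EA settings are illustrative.

```python
# Generic evolutionary search over an ANN hyper-parameter space.
import random

def evolve(fitness, space, pop_size=10, generations=20, seed=0):
    """space: dict name -> list of candidate values. Returns the best config."""
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in space.items()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            mom, dad = rng.sample(parents, 2)
            child = {k: rng.choice((mom[k], dad[k])) for k in space}  # crossover
            if rng.random() < 0.3:                # mutation
                gene = rng.choice(list(space))
                child[gene] = rng.choice(space[gene])
            children.append(child)
        pop = parents + children                  # elitism: keep the parents
    return max(pop, key=fitness)
```

Because the parents survive each generation, the best fitness never decreases, but every `fitness` call in a real study is a full training run, which is exactly the cost the review highlights.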
Parallel distributed processing is a relatively new method. A distributed cloud connects data and applications delivered using cloud-computing technologies from several geographical locations; when something is distributed in IT, it is shared among multiple systems located across diverse areas. The volume of information, along with the time it takes to analyze it and monitor the projected results efficiently and effectively, has grown dramatically. This paper presents a system to assist users in performing composite tasks interactively with the least processing time. Distributed Parallel Processing (DPP) and Cloud Computing (CC), two great technologies, can process and answer the user's problem quickly. The suggested system was developed using several sources (source generators, an under-test load, computing machines, and processing units) and web servers through the cloud. Hash codes are generated on the client side and sent to the web server, which delivers these codes to the designated cracking servers. It has been verified that when employing a light load (a single hash code) with multiple servers and multiprocessors, the suggested system gives improved efficiency in terms of kernel-burst, user-burst, and total-execution timings. It has also been demonstrated that employing large loads (many under-test codes) with many computing machines using multiprocessors improves the system's performance, and the suggested computational system outperforms in terms of parallel processing. The proposed method took these situations into account because code breaking is influenced by two precarious criteria: minimal breaking time and cost-effective usage of computer resources.
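The division of labor described above (clients hash, cracking servers search disjoint keyspace slices) can be sketched as follows. MD5 and the lowercase alphabet are stand-in assumptions, since the abstract does not fix the hash scheme; the slicing is what allows many servers and processors to work in parallel without duplicating effort.

```python
# Sketch of client-side hashing and server-side keyspace-slice cracking.
import hashlib
from itertools import product
from string import ascii_lowercase

def md5_hex(text):
    """Client side: hash the secret and send the digest to the web server."""
    return hashlib.md5(text.encode()).hexdigest()

def crack_slice(target_hex, length, alphabet=ascii_lowercase, part=0, parts=1):
    """Cracking-server side: exhaustively search only the keyspace slice
    whose first-symbol index is congruent to `part` modulo `parts`."""
    for i, first in enumerate(alphabet):
        if i % parts != part:
            continue
        for rest in product(alphabet, repeat=length - 1):
            candidate = first + "".join(rest)
            if md5_hex(candidate) == target_hex:
                return candidate
    return None
```

In the paper's setting each slice would run on its own server (or processor), and the first one to return a non-`None` result ends the search; the remaining slices searching to exhaustion is the cost the "light load" experiments measure.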
The popularity of mobile applications is rapidly increasing in the age of smartphones and tablets. Communication, social media, news, sending emails, shopping, paying, viewing movies and streams, and playing games are just a few of their many uses. Android is currently the most popular mobile operating system in the world: the Android platform dominates the mobile operating-system market, and the number of Android mobile applications grows day by day. At the same time, the number of attacks is also increasing. Attackers take advantage of vulnerable mobile applications to execute malicious code that can be harmful and can access sensitive private data. Security and privacy of data are critical and must be prioritized in mobile application development. To cope with the security threats, mobile application developers must understand the various types of vulnerabilities and the prevention methods. The Open Web Application Security Project (OWASP) lists the top 10 mobile-application security risks and vulnerabilities. Therefore, this paper investigates mobile application vulnerabilities and their solutions.