Authors - Dharv Prajapati, Nikita Bhatt, Amit Thakkar, Dhaval Bhoi Abstract - Cassava is the third-highest source of carbohydrates after rice and maize. The various diseases found in cassava pose a food-security threat in developing countries. Since agricultural farming is closely tied to every country's economy, researchers have made considerable efforts to identify leaf disease early, before it spoils the whole plant. Recently, Machine Learning (ML) models have achieved great success thanks to big data, available resources, and improvements in learning algorithms. However, big data contains invalid instances, such as ambiguous, mislabelled, or irrelevant ones. The performance of an ML model is not degraded if the number of such instances is small, as their effect on the average gradient during training is small, but performance does degrade when the quantity of invalid instances is large. Existing work on leaf disease detection follows a model-centric approach, where hyperparameter tuning is performed to enhance system performance. Recently, the focus has shifted from the model-centric to the data-centric approach, where the model is fixed but the quality of the dataset is enhanced by ensuring label consistency, systematic sampling of training data, and selection of appropriate batches. This is an invaluable step towards the improvement of any system. In this work, noisy-label detection and correction are performed on the Cassava Leaf Disease Classification dataset using confident learning. The resulting quality data is given to the model, and a performance comparison between the model-centric and data-centric approaches shows that the data-centric approach improves performance by 6.33%. The conclusion of this work is not to downplay the significance of the model-centric approach, but to showcase the neglected potential of enhancing such systems through a data-centric approach.
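The noisy-label detection step described above can be sketched with a minimal confident-learning rule (a hypothetical illustration, not the authors' implementation): each class gets a confidence threshold equal to the mean predicted probability over examples carrying that label, and an example is flagged as a candidate label error when its probability for the given label falls below that threshold while another class is predicted more confidently.

```python
import numpy as np

def find_label_issues(pred_probs, given_labels):
    """Flag likely label errors via a simple confident-learning rule:
    an example is suspect when its predicted probability for the given
    label is below that class's average self-confidence AND another
    class is predicted with higher probability."""
    n_classes = pred_probs.shape[1]
    # Per-class threshold: mean predicted prob among examples labeled with that class
    thresholds = np.array([
        pred_probs[given_labels == c, c].mean() for c in range(n_classes)
    ])
    self_conf = pred_probs[np.arange(len(given_labels)), given_labels]
    predicted = pred_probs.argmax(axis=1)
    return (self_conf < thresholds[given_labels]) & (predicted != given_labels)

# Toy example: 4 examples, 2 classes; example 2 is labeled 0 but looks like class 1
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.3, 0.7]])
labels = np.array([0, 0, 0, 1])
issues = find_label_issues(probs, labels)
```

Flagged examples can then be relabelled to the predicted class or dropped before retraining, which is the correction step the abstract refers to.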
Authors - Krishna Vijay Singh, Ish Nath Jha, Amit Kumar Roy Abstract - Software-Defined Networking (SDN) is a recent networking technology with promising properties that address the weaknesses of traditional networks. However, it mainly relies on the OpenFlow protocol, which was developed for wired networks. The forwarding devices of an SDN use the OpenFlow protocol to communicate their state to the controller and acquire forwarding decisions from it. Under these circumstances, frames originating from wireless protocols such as IEEE 802.11, ZigBee, and 6LoWPAN cannot be handled appropriately through the OpenFlow protocol. Moreover, existing SDN controllers are also not designed to control and provision networks that follow heterogeneous wireless protocols. In this paper, we review SDN-based efforts to handle handover-related issues that may arise from mobility or data offloading from one wireless technology to another. More specifically, three categories of networks are considered in this article: wireless LAN, wireless sensor, and cellular. After delving into present SDN-based handover management efforts, opportunities to further improve the operation and performance of handover management with the help of SDN are also presented.
Authors - Mridula Korde, Jagdish Kene, Minal Ghute Abstract - Wireless communication is an indispensable aspect of today's lifestyle due to the extensive use of online communication and online services. The transmitter and receiver communicate with each other through a wireless channel. Over the past few years, modeling wireless channels for radio wave propagation over difficult geographical areas, such as hilly terrain and the sea surface, has become a challenging task for researchers. The performance of the transmitter and receiver can be optimized by means of channel modeling. Performance parameters such as shadowing, fading, multipath propagation, and Doppler shift are used to evaluate the effectiveness of a wireless communication system. In this paper, the mechanism of the Okumura-Hata channel model is discussed in relation to its operational principles, mainly with respect to path loss. The paper studies the behaviour of path loss as parameters such as base station antenna height, the distance between transmitter and receiver, and the frequency of the transmitted signal are varied.
Authors - Pooja Sureshappa Shivankar, Ganesh K. Pakle, Madhav V. Vaidya Abstract - The technique of categorizing messages based on the emotions they express is the focus of this project's problem of sentiment classification on Twitter. Twitter is a networking site and microblogging service where users can post status updates with a character limit of 140. More than 200 million people have registered on the website, of whom 100 million are active users and 50% log in daily, resulting in the publication of more than 250 million tweets per day. By examining the emotions expressed in tweets, which are widely used, we hope to capture the sentiment of a broader public. Many applications call for the analysis of public opinion, including businesses trying to predict how the marketplace will respond to their products, forecasting political outcomes, and the analysis of social processes such as the stock exchange.
Authors - Shradha Nilawar, Madhav Vaidya, Ganesh Pakle Abstract - In this study, handwritten Hindi characters are recognized using a convolutional neural network (CNN) based technique. Once recognized, the characters can be used in a variety of ways and stored digitally on a computer. The characters in these images are all written in the Devanagari script. Each of the 46 character classes has 2000 patterns. The training set makes up 85% of the data set, whereas the test set makes up 15%. Such image data sets can be used to test the classification algorithms of OCR systems. The highest accuracy score on the test set was 98.47%. The data set offers a sizable collection of Devanagari handwriting styles produced by numerous writers for the development and testing of handwritten text recognition systems. Three fully connected layers are added after the four CNN layers. The input takes the form of a handwritten image in grayscale. Filters are used to extract distinctive information from the images at each layer; this is achieved through convolution. The pooling and flattening operations are also crucial. A fully connected layer receives the output of the CNN layers and processes it. After computing the probability value for each character, the character with the highest score is shown as the outcome. The recognition accuracy is 98.94%. Similar models already exist for this purpose, but the new model proved more effective and precise than some of the earlier versions.
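The convolution step that extracts features from each layer can be sketched as follows (a minimal illustration: the image size and filter values here are hypothetical, not the paper's):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over a grayscale image ('valid' mode, stride 1)
    and return the resulting feature map."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with the image patch, summed
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 32x32 grayscale character image convolved with a 3x3 vertical-edge filter
img = np.random.rand(32, 32)
edge_filter = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
fmap = conv2d_valid(img, edge_filter)
```

In a real CNN, many such learned filters are applied per layer, followed by pooling and, finally, flattening into the fully connected layers.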
Authors - Lavika Goel, Siddhi Kumari Sharma, Namita Mittal, Atulya Raj, Sachi Pandey Abstract - Facial emotion recognition is a vital part of psychology, criminology, and social media. In each of these fields, a high level of classification reliability is essential. At present, human judgment is relied upon in these fields, but humans are not always accurate. Consequently, there is a need for a reliable and fast method of identifying human emotions. Recent advances in AI and pattern recognition have offered several algorithms for recognizing human emotions, such as Local Binary Pattern (LBP), Convolutional Neural Network (CNN), and GLCM for feature extraction, and SVM as a classifier. A facial emotion recognition system that detects five basic emotions of a face is implemented in this paper. In the course of this work we studied many machine learning techniques and algorithms, along with tools such as scikit-learn, Keras, and OpenCV that were used in the implementation. In this paper, we use LBP, GLCM, and Gabor filters to extract features from images. For training the model, we use the CK+ dataset, which contains 24282 images in the training set. A Support Vector Machine (SVM) is applied for the classification of emotions using the features extracted with the above techniques.
Authors - Kumar Selvakumaran, Aman Sami, Anand K, S.Sathyalakshmi Abstract - The need for explainability is especially critical as deep learning solutions become more prevalent. Creating interpretations and visualizations of the features captured by convolutional neural networks (CNNs) for various computer vision applications is becoming an increasingly difficult challenge due to the growing complexity of CNNs. Understanding the behaviour of complex machine learning models is now highly essential, as important responsibilities in different fields are being taken over by artificial intelligence systems. Consequently, many explainability techniques have come into existence to make machine learning models more interpretable, such as occlusion methods, local surrogate models, and pixel attribution. This paper aims to convey the significance of explainable computer vision and its application in different fields, with a focus on safety surveillance. This objective is achieved using an explainability technique called EigenCAM together with YOLOv5, the object detection framework whose predictions are explained.
Authors - Manish Kumar Singh, Jawed Ahmed, Kamlesh Kumar Raghuvanshi, M. Afshar Alam Abstract - Fake-news-related events are sharply on the rise all across the world, particularly in India. A lot of multidisciplinary research is taking place in this area. Nevertheless, Artificial Intelligence based research is considered to provide better solutions for detecting fake news on an automated basis. However, a large benchmark dataset is essential for running machine learning algorithms, and the datasets used for training and testing machines to spot fake news are limited to the geopolitical events and linguistic aspects of a particular state or nation. A good amount of work has been done on building fake-news datasets in different parts of the world; however, a benchmark dataset covering fake news events in India is still lacking. This paper introduces BharatFakeNewsKosh, a benchmark data repository of fake news events in India, for training and testing machine learning models to detect fake news. Further, key machine learning techniques are used to detect fake news using the BharatFakeNewsKosh dataset, and the resulting performance is compared with that of the existing benchmark datasets so far widely utilized for studying fake news across the globe.
Authors - Sunil Kumar, Niraj Singh Mehta Abstract - In this paper, a scheme for realizing a switch duty cycle above the normal 0.5 by including the load-side voltage in series with the supply voltage for the demagnetization of the transformer core is presented. A wide duty cycle is essential in many converters, especially in a forward converter. The usefulness of the present scheme is that no extra circuitry is required for this purpose. Also, as the load voltage is stepped up, the duty cycle widens correspondingly. A 24 W, 24 V output converter is analyzed and presented to validate the proposed idea through simulation results.
Authors - Kriti Suneja, Ayush Garg, Bhola Yadav, Kaustuv Sahu Abstract - This paper discusses the Nosé-Hoover chaotic system, followed by the design of an adaptive control and adaptive synchronization scheme. The adaptive control and synchronization laws are designed in order to synchronize a practical chaotic system with unknown system parameters to an ideal system with known parameters. The designed update laws are first implemented and verified using Python, followed by their implementation on a Xilinx Kintex-7 device using Xilinx Vivado HLS tools, and the post-synthesis device utilization and timing results are discussed.
Authors - Purvi Prajapati, Amit Thakkar, Nirav Bhatt, Nikita Bhatt Abstract - In high dimensional data, one of the challenges is to tackle the high dimensional feature and label space. Research is currently ongoing on Extreme Multi-Label Classification, a supervised machine learning problem focusing on high dimensional data. "Extreme Multi-Label Classification" is the extension of Multi-Label Classification to a high dimensional label space. One objective is to extract important labels from the high dimensional label space, where traditional Multi-Label Classification approaches fail. It is used in many classification applications such as Wikipedia categorization, product recommendations, tagging applications, search query recommendations, and word recommendations. This paper covers challenges and approaches for handling the high dimensional classification problem, followed by potential directions for improving the classifier.
Authors - Aashrit Garg, Anita Shrotriya Abstract - As the processing power of modern-day computers increases at an exponential rate, computers are able to process data at ever higher speeds. Combine this speed with ingenious algorithms and computers are able to surpass the limits of even the human mind. One example of this is the game of chess, where computers have proven to be a worthy foe to even the smartest of people. This fact is not so surprising to most people, because they see the potential of artificial intelligence and machine learning algorithms. However, machines have been defeating humans for quite some time now, and not high-tech supercomputers but regular personal computers. How are computers outperforming humans in one of the most complex games on this planet, when all our lives we have learned that computers are 'dumb machines' which do exactly what a programmer says? A programmer cannot code all 10^120 possible games of chess, so computers have to rely on two things: math and computational power. The computer uses brute force to analyse the outcomes of the game by creating a search tree. It evaluates the search tree with the minimax algorithm to find the moves that most enhance the chances of winning. However, the search tree constructed is huge, and evaluating every node with the minimax algorithm is not feasible. This is where alpha-beta pruning comes into the picture. It is a search algorithm that decreases the number of nodes the minimax algorithm must evaluate in its search tree. As alpha-beta pruning became more popular, extensions were made to increase its potency even further. The two discussed in this paper are Transposition Tables and the History Heuristic. This paper explains in detail how they function and how effective they are at achieving the expected results.
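The pruning idea can be sketched on a toy game tree (a hypothetical illustration of the core algorithm, not a chess engine): each node returns the best score achievable from it, and remaining siblings are skipped as soon as alpha meets or exceeds beta, because the opponent would never allow that branch.

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a nested-list game tree.
    Leaves are numeric evaluations; internal nodes are lists of subtrees."""
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining siblings cannot affect the result
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:       # prune: the maximizer already has a better option
            break
    return value

# Small textbook-style tree: the maximizing player can guarantee a score of 6
tree = [[[5, 6], [7, 4, 5]], [[3]]]
best = alphabeta(tree, 3, -math.inf, math.inf, True)
```

Transposition tables and the history heuristic layer on top of this skeleton: the former caches values of positions already searched, the latter reorders moves so that cutoffs happen earlier.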
Authors - Kritarth Kapoor, Samridhi Singh, Nagendra Pratap Singh, Priyanka Rathee Abstract - Typically, bell pepper growers are unaware of bacterial spot disease on their plants, and as the condition progresses, the yield usually declines. Like a virus or wilt, pepper-related diseases can obliterate an entire garden. The best course of action when there are problems with the pepper crop is to remove the sick plant before the disease spreads to the rest of the garden. For this purpose, we need a good identification model which can differentiate between an image of a healthy leaf and an unhealthy one. The method of processing digital data in the form of pixels is known as image processing, and due to the complexity of the data, plant disease identification is a major problem in image processing. In this study, we employ two deep learning models, AlexNet and VGG16, to identify leaf diseases in bell pepper plants, achieving good accuracies of 97.80% and 99.38% respectively.
Authors - Ankush, Samridhi Singh, Nagendra Pratap Singh, Priyanka Rathee Abstract - Skin cancer is a severe health issue; thus, the major concern of physicians is to arrive at a precise clinical diagnosis. At present, mechanisms have been developed in the area of image processing, with the help of algorithms and systems, to detect and classify skin cancer. Computer-based technology provides a convenient, inexpensive and quick diagnosis of skin cancer symptoms. Several techniques, non-invasive in nature, have been proposed to investigate the symptoms of skin cancer and whether they represent melanoma or non-melanoma. The general process applied in skin cancer detection is image acquisition, pre-processing, segmentation of the acquired pre-processed image, extraction of the required features, and classification of the skin cancer. After implementing the model, we obtain an accuracy of 88.48% in the case of AlexNet and 90.41% in the case of VGG16.
Authors - Ankita Bansal, Megha Khanna, Laavanaya Dhawan, Juhi Krishnamurthy Abstract - Genetic algorithms (GAs) and search based algorithms (SBAs) are very powerful optimization methods inspired by the success of evolutionary processes in the natural world. These optimization methods have been used to develop effective classifiers and have been successfully applied in various domains. This paper aims to evaluate GA based methods for the task of Aging-Related Bug (ARB) prediction and analyse their effectiveness. ARBs are software defects that manifest in a software module after prolonged usage. Such bugs are extremely hard to detect through traditional software testing methods and can have a very high impact when encountered during the operation of a software system. Predictive models that can analyse software code and flag the possibility of an ARB can be useful in mitigating the impact of ARBs. In this paper, we present an empirical study that statistically analyses the performance of GA/SBA based classifiers for ARB prediction on five datasets. We account for data imbalance by creating synthetic minority samples using SMOTE. The results of this study show that SBA algorithms are effective in developing predictive models for ARB detection and that their performance is comparable to that of machine learning (ML) algorithms for the same task.
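The SMOTE oversampling step mentioned above can be sketched as follows (a simplified version: a new synthetic sample is interpolated between a random minority point and one of its k nearest minority neighbours; the data here is hypothetical):

```python
import numpy as np

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples: pick a random minority
    point, pick one of its k nearest minority neighbours, and interpolate
    a new point somewhere on the line segment between the two."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # Distances from x to all minority points (distance 0 to itself)
        d = np.linalg.norm(minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        nb = minority[rng.choice(neighbours)]
        lam = rng.random()                    # interpolation factor in [0, 1)
        out.append(x + lam * (nb - x))
    return np.array(out)

# Four hypothetical minority-class points in 2D feature space
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote(minority, n_new=5)
```

Because each synthetic point is a convex combination of two existing minority samples, the oversampled class stays within the region the minority already occupies.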
Authors - Shashi Pal Singh, Ajai Kumar, Aarti Saxena, Richa Verma Abstract - Language is the primary mode of communication, and communication is the only way to convey our thoughts and emotions to others. But there are many languages that we do not know how to speak, and we cannot learn all languages quickly, which is why Machine Translation systems were invented to help us communicate with anyone from anywhere. Researchers started working on Machine Translation systems in the 1950s and have since developed various remarkable techniques to make communication easy. Machine Translation Evaluation (MTE) methodology checks the accuracy of translations produced by machine translation tools. It is extremely important because, while developing a translation system, it constantly checks the performance and helps us make appropriate changes to improve accuracy. Mostly this is done by comparing the output of machine translation systems with translation(s) done by human beings, though there are also techniques that do not require any reference sentences. In our project, we use four automated Machine Translation Evaluation metrics, namely TER (Translation Error Rate), METEOR (Metric for Evaluation of Translation with Explicit ORdering), BLEU (Bilingual Evaluation Understudy), and NIST (National Institute of Standards and Technology), for English-Hindi translation.
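As an illustration of a reference-based metric, TER can be sketched in a simplified form (a hypothetical sketch: full TER also counts block shifts of word sequences, which are omitted here): the word-level edit distance between hypothesis and reference is divided by the reference length.

```python
def simple_ter(hypothesis, reference):
    """Simplified Translation Error Rate: word-level edit distance divided
    by reference length. (Full TER additionally counts block shifts.)"""
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    # Standard Levenshtein dynamic programme over words
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n] / n

# One edit (a missing "the") against a 6-word reference: TER = 1/6
score = simple_ter("the cat sat on mat", "the cat sat on the mat")
```

Lower TER means fewer edits are needed to turn the machine output into the human reference, so lower is better, unlike BLEU, METEOR, and NIST, where higher is better.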
Authors - U. Sakthi, K. Thangaraj, M. Anuradha, M.K.Kirubakaran Abstract - With the advancement of the Internet of Things (IoT) and edge computing, the smart agricultural system is driven by data produced by the various sensors and smart computing devices in the agricultural land. A new methodological paradigm, high performance edge computing, is incorporated into a blockchain-enabled precision agriculture system to improve data processing related to resource management. Edge computing nodes collect and analyse the sensor data locally without transferring it to a remote centralized cloud server, which increases data processing speed and reduces network latency. The blockchain technology is combined with a machine learning algorithm to maintain a secure and protected distributed database for storing smart farm details such as pH, soil moisture, temperature, crop management, humidity, and water irrigation level. The proposed system improves food productivity and the performance of the smart agricultural system by providing useful information that helps farmers make timely decisions about the land and increase their profit.
Authors - Sanskruti S Patil, Mahesh S Patil, Satyadhyan Chickerur, Shantala Giraddi, Seetharam N Shahapur, Anup Hadalageri Abstract - Large-scale model training is typically slow and necessitates high-performance workstations and Distributed Deep Learning (DDL). DDL models trained on a massive volume of data can outperform a single accelerator; that is, the performance of deep learning (DL) models can be enhanced by using distributed and parallel deep learning methods. But DDL models need to be redesigned and evaluated for particular hardware resources and applications, so literature evaluating various DDL models is needed for assistance. In the proposed work, the authors design and evaluate DDL with data parallelism for classifying Diabetic Retinopathy (DR) images. A comparative performance study of the DDL strategy implemented on DenseNet is presented, where the model is trained on multiple GPUs with counts from 1 to 8. The results show that the model takes less time when trained on multiple GPUs compared to a single GPU. A detailed evaluation analysis of DDL with data parallelism is presented in this paper.
Authors - Shah Gargi B., Sajja Priti S. Abstract - TTS (Text to Speech) synthesis systems have been developed for Indian languages for a few decades, but very little work has been done specifically for the Gujarati language, and the synthesized speech does not sound similar to natural human speech. Naturalness is the key parameter for achieving a natural-sounding effect in speech synthesis. This paper proposes a method for improving the naturalness of speech synthesis for the Gujarati language using fuzzy logic. The pause (silence) between words is an important feature of speech; the pause need not be the same after each word in a sentence, as it depends on the characteristics of the language and other parameters of the sentence. Within the classic TTS architecture, fuzzy logic is proposed as a new approach for calculating the pause to be applied after each word. The system takes a sentence or paragraph as input, from which the derived variables Word Importance, Sentence Size, and Position in Sentence are computed. The fuzzy logic produces the pause in seconds to be applied after each word. The membership value of the derived variables is calculated using the straight-line formula. The developed TTS system is tested on a SARS-CoV-2 Covid-19 news dataset in the Gujarati language, built by collecting news lines from the websites of popular Gujarati news channels. Fuzzy logic is thus proposed to solve the problem of naturalness in synthesized speech, aiming to achieve a more natural-sounding effect in the generated speech.
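The straight-line membership computation mentioned above can be sketched with a triangular membership function (a hypothetical illustration: the variable name, scale, and breakpoints below are assumptions, not the paper's actual fuzzy sets):

```python
def triangular_membership(x, a, b, c):
    """Degree of membership for a triangular fuzzy set peaking at b:
    rises linearly from a to b, falls linearly from b to c,
    and is zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising straight line
    return (c - x) / (c - b)       # falling straight line

# Hypothetical "medium word importance" set over a 0-10 importance scale
mu = triangular_membership(4.0, a=2.0, b=5.0, c=8.0)
```

In a fuzzy TTS rule base, such membership degrees for each derived variable would be combined by the inference rules and defuzzified into a pause duration in seconds.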
Authors - Dipti Chauhan, Jay Kumar Jain Abstract - In today's world, full of technological instances where we constantly share our information through networks, we are ever more exposed to risk. It is not always possible to access only private networks or VPNs; our networks are exposed to different types of cyber-attacks, in the form of information leakage, 24 x 7. Networks are evolving and carry considerable risk factors as new trends and attacks emerge. These attacks are mainly focused on the open ports of networking devices, and researchers have developed networking tools for this purpose, such as network profiling, vulnerability scanning, and network mapping. To help intrusion detection systems (IDS) identify fraudulent network traffic, machine learning (ML) has recently gained popularity as a technology. The quality of an ML model's decisions depends on the quality of the dataset used to train it. In this research paper, we profile network data and train a machine-learning-based classification model to classify whether given traffic is normal or an anomaly. Different methods, namely Naive Bayes, BayesNet, Naive Bayes Multinomial Text, and Naive Bayes Updateable, have been applied to this data set to build the classifier model. The data are categorized using the Weka 3.8.5 tool, and this trained data set has been used in numerous simulations.
Authors - Vaishali P. Bhosale, Poornima G. Naik, Sudhir B. Desai, Prashik Patekar Abstract - Mobile banking applications have revolutionized the manner in which financial transactions are executed and have made customers' lives easier, improving the quality of customer service in the banking sector. During the COVID-19 pandemic, financial transactions through mobile banking apps were at their peak; the apps gained tremendous importance and became more popular. The current paper reviews the popular and most frequently used mobile banking services provided by public and private sector banks. The authors have designed and developed a model for a secure mobile app that performs financial transactions securely using QR codes, providing a low-cost solution to the problem under consideration. The CIA (Confidentiality, Integrity and Authenticity) triad is taken care of in the model implementation through existing security techniques such as hashing, a role-based authentication mechanism, prevention of SQL injection attacks, and tracking the MAC address of a user. Possible QR code hacks are presented and solutions are proposed. The double spending problem is also tracked.
Authors - Deepali Dhaka, Saima Saleem, Monica Mehrotra Abstract - Spammers disseminating obscene content on Twitter have been studied and detected using various hybrid features and machine-learning approaches in the past. To gain greater insight into the textual data prevailing on platforms like Twitter, a correct vector representation is paramount. Our goal is to understand which encoding techniques are more suitable for representing long text documents. We propose a novel deep learning model consisting of a Universal Sentence Encoder (USE) as a feature extractor and an artificial neural network (ANN) as a classifier. We transform all the sentence vectors representing the tweets of a user into a document vector. These vectors are used as high-quality features to be processed by the artificial neural network for classification. To check the effectiveness of our proposed model, different sentence embedding techniques such as Doc2Vec, InferSent, and SentenceBERT have been compared against it. Experimental results show that the proposed model outperforms all of them in terms of recall, precision, F1-score, and AUROC. Our results show that a simple ANN combined with a USE-based deep learning approach can be a robust solution for the detection of spammers on Twitter.
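The step of turning a user's per-tweet sentence vectors into a single document vector can be sketched as mean pooling (a common aggregation; the embedding values below are hypothetical, and the paper's exact transformation may differ):

```python
import numpy as np

def document_vector(sentence_vectors):
    """Aggregate per-tweet sentence embeddings into one fixed-size
    document vector by averaging component-wise (mean pooling)."""
    return np.mean(sentence_vectors, axis=0)

# Three hypothetical 4-dimensional sentence embeddings for one user's tweets
embeddings = np.array([
    [0.2, 0.4, 0.0, 0.8],
    [0.4, 0.0, 0.2, 0.6],
    [0.0, 0.2, 0.4, 1.0],
])
doc_vec = document_vector(embeddings)
```

The resulting fixed-size vector can then be fed to a classifier such as the ANN regardless of how many tweets the user has posted.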
Authors - Poornima G. Naik, Kavita S. Oza Abstract - During the COVID pandemic, academia as a whole shifted to online mode. Along with teaching and learning, the whole examination system, including question paper setting and evaluation, needs to be automated. The quality of question paper generation is an important aspect of an examination system. Courses and course structures need to be updated regularly to keep up with recent trends in the subject domain, and it is very difficult to track course and question paper versions as course versions change frequently. Another important aspect is maintaining secrecy in paper setting, which requires identifying the roles and responsibilities of individuals in the existing system. The proposed system is designed to keep track of the course structures pertaining to different courses, subjects, and academic years; each question paper is tagged with the appropriate course structure version tag, which contains a nested tag consisting of internal and external examiner codes. The motive for tagging each question paper with version information is to reduce table lookups.
Authors - Veena Jokhakar Abstract - This paper describes the creation of a multi-dimensional cube for the Social Justice and Empowerment Department (SJED), which provides loans to the most disadvantaged socioeconomic groups. These officially designated groups of people, namely minority classes, Scheduled Tribes, Backward Classes, and Scheduled Castes, apply for loans under various schemes, districts, and villages. In this paper we create a cube in SQL Server Analysis Services to analyze transactions made by beneficiaries, scheme-wise and location-wise, and to project the unit cost or revenue cost. We use SQL Server Management Studio, SQL Server Integration Services, and SQL Server Analysis Services to create the multi-dimensional cube.
Authors - Sheenam Naaz, Suraiya Parveen Abstract - Robust connectivity and a good user experience with smart cities and the IoT are just two of the benefits of 5G wireless communication devices' extensive capability, with incredible data speeds, super dependability, and relatively minimal latency. Enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), and massive machine-type communications (mMTC) are just a few examples of how future networks are expected to disrupt already popular applications. For this reason, it is important to study the eco-friendliness and efficiency of 5G networks in smart cities. This article uses a high-level introduction to examine how 5G systems can play a crucial role in helping smart cities become environmentally sustainable. It provides an overview of fifth-generation (5G) communication systems and the many 5G approaches utilized in smart city applications to improve sustainability. We then dive into the sustainability indicators for 5G networks, including energy usage, carbon footprint, pollution, cost, health, and security, across environmental, social, and economic dimensions.
Authors - Vassil Vassilev, Bal Virdee, Karim Ouazzane, Dion Mariyanayagam, Viktor Sowinski-Mydlarz, Monika Rabka, Herbert Maosa, Sorin Radu Abstract - This paper presents the results of a pilot project of London Metropolitan University aimed at developing a set of urban data services in support of the local communities in several boroughs of the city of London. They target the health and well-being of citizens by analyzing different types of information from a number of data sources: environmental sensors, geolocation information, and models of the urban infrastructure. Unlike the complex government projects covering large urban areas, which require significant resources and typically involve large service providers operating public clouds, our project targets local communities with limited capabilities by utilizing the concept of a private cloud running on commodity infrastructure within their reach. By employing a number of proven data technologies, software tools, and AI methods, the project delivers a comprehensive picture of urban life. The first phase of the project, reported here, focuses on outdoor and indoor pollution, which is key to addressing many local community activities such as environmental protection, urban planning, local transport, communal housing, and social services within the area.
Authors - Chevella Anil Kumar, Kancharla Anitha Sheela Abstract - Communication is essential in our daily lives. It serves as the foundation for all human interactions: it is how people communicate with one another and receive information from each other. No one can share their feelings or make their thoughts understood unless they communicate effectively. A person with a speaking or hearing impairment cannot communicate effectively with others and, as a result, cannot compete on equal footing with other people. According to the World Health Organization, 63 million people in India are deaf, either completely or partially. To communicate with others, a deaf or mute person usually uses sign language; normal people who do not know sign language, on the other hand, cannot communicate effectively with them. To bridge this communication gap, we propose a system capable of recognizing spoken words and displaying the corresponding ISL gestures, providing convenient, real-time communication between the disabled community and other people using deep learning algorithms.
Authors - V.Hari Sudarsanam, P.Sai Sumanth, K.Vinod, A.Srisaila Abstract - Spamming is a type of cyber-attack that deceives unsuspecting web users into divulging sensitive data including usernames, passwords, credit card and social security information. Attackers trick internet users into giving up personal information by pretending to be a reliable or official website. To date, several solutions have been presented to stop spam, including blacklists, whitelists, heuristics, and visual similarity-based algorithms, yet online consumers are still being duped into disclosing important information.
Authors - P. Anusha, V. Yaswanth, G. Shanmukh, Nunna Satya Krishna Abstract - Face Recognition (FR) and Surveillance Video Analytics are well-defined and solved problems in the applications of Computer Vision. Face Recognition aims to identify an already known person in a given image. Surveillance Video Analytics seeks to identify the occurrence of abnormal events or things in public places. But recognizing the movements of most-wanted criminals or suspects in public areas using Face Recognition systems with unclear surveillance video inputs is a very challenging problem. This work analyses the performance of three existing popular machine-learning-based Face Recognition systems: (i) the Viola-Jones detector, (ii) HOG-based FR, and (iii) PCA-based FR. The performance of these FR models is analysed on two different datasets. One is a benchmark dataset that contains only frontal views of the faces of various subjects. The other is a dataset we created with 10,000 images, collected from 50 subjects with 200 images taken from various angles per subject. This work observes that the above models improve their performance by 7-10% in terms of accuracy when trained on the proposed dataset.
Authors - Rashmi Dixit, D.P.Gandhmal Abstract - With the outbreak of the Covid-19 pandemic, the engineering education system all over India came to a halt. Students faced a great problem in continuing their studies, as institutions were totally closed to maintain the social distancing essential to escape such an epidemic. Later, to allow studies to continue, the online class system was introduced, through which students can benefit and complete their education. It was a new way for most students, as the system was unknown to them. This gave rise to the use of digital platforms, online resources, and digital modes of instruction. Today's students use short educational videos as a tool to learn everything; abstract concepts that once seemed difficult to teach and learn are now more approachable and understood. This paper presents an experiment done with the Theory of Computation course for the second-year Computer Science & Engineering programme. Gamification with short video segments for course topics enables more effective processing and memory recall. This blending of gamification with traditional teaching in the form of videos has wide appeal because of its visual and aural components.
Authors - Ganeshan. M, A. Rajesh Abstract - Serverless computing has emerged as a promising research subject in computer science as it grows in favour among developers because of its low cost and elasticity. However, one primary concern that has cropped up in recent months with serverless computing is vendor lock-in: it can be hard to port to another vendor's platform without considerable effort and cost. Serverless on AWS is a new way to build cloud-native IoT systems that are highly performant, highly resilient, and low-maintenance. In this paper, we implement design concepts for serverless systems on AWS using the open-source OpenWhisk software. To provide low-cost and maintenance-free IoT services, traditional cloud service providers offer platforms that are progressively migrating toward the serverless approach. OpenWhisk takes care of the infrastructure, servers, and scaling with Docker and containers. Because Apache OpenWhisk's components are built using containers, it provides a wide range of deployment options in cloud environments. Many of today's prominent container frameworks, such as Kubernetes, are available as options. The Kubernetes framework is a well-known open-source container-orchestration system widely used in industrial and academic fields.
Authors - Litty Koshy, S.PraylaShyry Abstract - In today's digital world, all kinds of enhancement are becoming possible, and as the use of images and videos grows day by day in our lives, the enthusiasm for manipulating images increases concurrently. In this study, the most recent technical analysis and observations of various copy-move image and video forgery techniques were carried out. Image splicing, copy-move forgery, and image resampling are the three basic types of image counterfeiting. Copy-move forgery is commonly used for making tampered photographs. As the forgery of images and videos increases, it is essential to develop tools for the detection of such forgeries. This study examines several forms of digital image and video forgeries as well as detection techniques.
Authors - Sephali Mohanty, Trailokyanath Singh, Swarnalata Sitha Abstract - In the present article, an EOQ (Economic Order Quantity) ordering policy with variable deterioration, generalized exponential demand, and time-dependent holding cost is considered, with the following components: (i) the deterioration rate as well as the holding cost is a linearly increasing function of time; (ii) the demand is an exponentially increasing function of time; (iii) shortages and all types of backlogged demand are not permitted. Mathematical expressions for the cycle time, inventory level, and optimal system cost are derived approximately. A couple of numerical examples are presented to describe the mathematical model of the system. A sensitivity analysis of the cost in one of the numerical examples with respect to its system parameters is provided.
Authors - Kolipaka V N S Sai Pranavi, Ghanta Naga Durga Jai Rathan, Koppaka Hemanth Durga Prasad, Pellakuri Vidyullatha, P.Haran Babu Abstract - Nowadays, a growing number of users create new issues and discussions on social media, forming different kinds of groups, such as those with positive and negative comments. This paper focuses on groups of user discussions and identifies which category they belong to. The user messages in the social media data are parsed, network relationships are identified, and data mining techniques are applied to group different types of communities. In unsupervised approaches, clustering structures treat collections of objects as similar or dissimilar. The aim of this paper is to develop clusters based on the features and characteristics included in the proposed model. This work helps the system categorize people into groups, which also helps to identify the groups of people participating in discussions. This paper applies clustering algorithms such as K-Means, DBSCAN, and Agglomerative clustering to cluster data and to find large streams of community messages in social media data. It throws light on a novel use case of communities and proposes an algorithm that shows the best clustering results. This application tells us which group of people saw a post and who gave their opinions on it, allowing the users to be categorized.
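The K-Means grouping described in this abstract can be illustrated with a minimal sketch (not code from the paper; the toy 2-D "message embeddings" and all names here are hypothetical): each point is assigned to its nearest centroid, and each centroid is moved to the mean of its assigned points until the groups stabilize.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """A minimal K-Means sketch: assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distances from every point to every centroid, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# toy 2-D "message embeddings": two well-separated groups of users
rng = np.random.default_rng(1)
pos = rng.normal(loc=[5.0, 5.0], size=(30, 2))    # e.g. positive comments
neg = rng.normal(loc=[-5.0, -5.0], size=(30, 2))  # e.g. negative comments
X = np.vstack([pos, neg])
labels, centroids = kmeans(X, k=2)
```

On real social media data, the 2-D points would be replaced by feature vectors extracted from the parsed messages; DBSCAN and agglomerative clustering mentioned in the abstract follow different assignment rules but consume the same kind of input.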
Authors - Md. Motahar Hossain, Nitin Pathak Abstract - The MSME sector is one of the vibrant sectors of the Indian economy, contributing significantly to the economy of India. The study aimed at examining the factors, resulting from the COVID-19 pandemic, that hamper the contribution of small and medium-sized enterprises to the economy of India. Relevant papers have been evaluated, and key factors that firmly oppose the development or increase of MSMEs' contribution to the economy have been identified. Moreover, the study found that MSMEs' contributions to the economy of India in relation to GDP, employment, and exports were affected by these factors. The study also proposed some corrective steps to address the sustainability and growth challenges faced by MSMEs operating in India.
Authors - Parvathy V, Deepanjali Mishra Abstract - Artificial Intelligence (AI) is a machine technology that uses a computer system, with minimal human intervention, to carry out behaviour that is more intelligent in nature. Artificial intelligence may also be considered a technique by which the intelligence of a human being is coordinated with machines, especially in the field of computer science. There are various applications of Artificial Intelligence, such as expert systems, natural language processing, speech recognition, and machine learning. AI technology and its strategies can prove very useful in almost all domains related to human behaviour and intelligence: decision making, healthcare applications, treatment of medically ill patients, business management in the form of leadership, emotional intelligence, group activity, management research, and many more. Healthcare is one of the prominent areas that has gained significance in the present day, of which mental health is considered one of the thrust areas. Mental health is one of the most important and vital issues that needs to be emphasized by all healthcare workers. The present-day technological revolution makes it all the more important to understand the capabilities and advancement of the field of Artificial Intelligence. Therefore, the basic objective of this paper is to analyse the position of Artificial Intelligence technology and its applicability to the mental health of women using a smart model.
Authors - Pooja P Kadam, Sachin Kadam Abstract - Quantum computers are best known for their outstanding performance. Harnessing the last 10 years of advancement in hardware and software technology, computational complexity, a measure of the time needed to execute complex optimization problems, plays a significant role in the performance of a system. Some classical optimization algorithms use random seeds to converge, and sometimes bad random seeds lead to non-convergence and consume a lot of computational resources when solving complex optimization problems. As quantum computing comes with the supremacy of computing, in this paper an attempt has been made to explicate and review quantum optimization algorithms such as the Genetic Quantum Algorithm (GQA) and the Quantum Approximate Optimization Algorithm (QAOA) and to compare them with classical optimization algorithms.
Authors - P.Aruna, S.Gayathri Devi, S.Chandia, M.Poongothai Abstract - The Internet of Things (IoT) evolves from mobile computing and ubiquitous computing. IoT connects all smart objects with sensors and networks, providing an integrated environment. Smart objects and wearable devices share and communicate with the help of the internet. Given that lots of private data are handled in the IoT environment, security and privacy play a vital role. Though IoT technology is booming, the devices are highly prone to security vulnerabilities. Currently, IoT technologies are used in many applications in business, healthcare, and agriculture. The adoption and success of the IoT framework depend on the security it provides. So the study of security aspects in IoT is a major issue, which is addressed in this paper. This article discusses the security challenges that could be faced in the IoT environment and the prevention mechanisms used in IoT to preserve privacy and security and create a secure environment.
Authors - Ankan Dutta, Surbhi Pal, Aishwarya Banerjee, Pratap Karmakar, Arpita Mukherjee, Debaprasad Mukherjee, Prabal Kumar Sahu Abstract - In our paper, several techniques for irrigation scheduling, such as Soil Moisture Level Measurement, Plant-Based Monitoring, the Climatological Approach (irrigation water to cumulative pan evaporation, or IW/CPE, ratio), and the Water Balance Approach, are briefly reviewed. Various technologies can be applied to design smart irrigation systems that optimize the use of water and reduce its wastage; thus, optimal irrigation decision management is a very important issue in today's scenario. A sustainable irrigation management system can be maintained by the application of machine learning algorithms. State-of-the-art studies on different machine-learning approaches for smart real-time irrigation systems are presented in this paper. Different issues in machine learning models, such as unnecessary overfitting and underfitting, accessibility of web infrastructure, and initial cost of deployment, are also briefly discussed. This study provides an introduction to the field of research, covering irrigation scheduling in smart agriculture, summaries of previous studies, irrigation scheduling techniques, and areas for improvement.
Authors - Adithya H H, Lekhashree HJ, Raghuram S Abstract - Drones are increasing in popularity for many applications in the private and government sectors. New applications such as land surveying and wildlife movement monitoring have been made simpler with the advent of drones in these domains. A majority of drones are fitted with some sort of vision capture system, such as a video camera, and capture the activity in an area as dictated by the application domain. In this work, we have added analytics functionality to the drone video stream; in particular, we have created a machine learning model capable of detecting physical violence between individuals. With the increase in the application domains of drones, their numbers are set to increase dramatically. An important capability is vision, and it can be put to good use if analytics are used to detect certain anomalous events in the drone's field of vision. While a drone has an intended application and route, these can be altered if such events are detected. For example, if any physically violent activity is detected, the drone can stop and continue to capture the activity instead of continuing on its usual route. This has the beneficial effect of capturing the activity and also the indirect effect of causing the perpetrators to cease the activity, as they are being watched. The proposed solution applies deep learning models for the detection of events of interest. Deep learning has emerged as the model of choice for classification applications, a primary reason being that deep models obviate the complicated feature engineering process due to their capability of learning features. From our experiments, we see that we are able to achieve a validation accuracy of over 92% on our dataset.
Authors - Ujwal Joshi, Gargi Phadke Abstract - Different control methodologies have evolved over time for robust and stable process control performance in various applications. Before the implementation of optimized multivariable control strategies such as Model Predictive Control (MPC) in process control, other methods like Proportional Integral Derivative (PID) controllers were used to control processes without much human intervention, and some are still in use. However, such methods do not prove to be efficient controls for the overall system. Data-driven models in combination with adaptive and optimizing control schemes like MPC have proved beneficial in real-time applications where simulation is a key factor. A general discussion of these methods, along with future-generation control philosophies in process control, is presented in brief. One such Machine Learning (ML) technique, Long Short-Term Memory (LSTM), is explained as a control method with an example. This article aims to provide more inclusive considerations for further research.
Authors - Namratha M, Anusree Manoj K, Niha, Pooja Srinivasan, Arpana M Ramaswamy Abstract - IoT as a technology has gained a lot of prominence, especially in the healthcare sector, where it is more commonly known as IoMT. The Internet of Things (IoT) is a network of physical items that are provided with software, sensors, processing power, and other technologies and may communicate with other systems and devices over communication systems like the Internet. This work is mainly focused on the Internet of Medical Things (IoMT), which specifically covers healthcare and medical applications consisting of medical IoT devices. As data from these devices travels through the Internet and is shared among many systems, it is vulnerable to various security threats, which can result in multiple anomalies. In cases where a patient's physiological parameters, like heart rate and temperature, need to be constantly monitored, any anomaly in the readings from the sensors may lead to erroneous diagnosis and treatment. Hence, timely detection of such anomalies is important. Anomaly detection involves discovering patterns in data that contradict normal behaviour. This work uses Federated Learning to detect anomalies in medical IoT devices like heart rate and temperature sensors. The data from the sensors of different patients is collected and trained on locally at the edge devices. The results from these local trainings are aggregated in the central Federated Learning model, and the local models are enhanced. The Federated Learning method preserves data privacy and security, as only the training results are shared and not the actual data.
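The train-locally-then-aggregate loop this abstract describes can be sketched with federated averaging (an illustrative sketch, not the authors' implementation; the linear model and all data below are synthetic stand-ins for sensor readings): each client runs a few gradient steps on its own data, and only the resulting weights, never the raw readings, are averaged centrally.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=20):
    """One client: a few gradient steps of linear regression on local data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, clients, rounds=10):
    """Server: aggregate locally trained weights by size-weighted averaging."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_train(global_w, X, y))  # runs on the edge
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three patients' edge devices
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.05 * rng.normal(size=40)
    clients.append((X, y))

w = fed_avg(np.zeros(2), clients)
```

An anomaly detector would then flag readings whose residual against the shared model exceeds a threshold; only `w`, not the per-patient data, crosses the network.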
Authors - Lakshitaa Sehgal, Sarthak Sood, Sanyam Sood, Anshal Aggarwal Abstract - Cardiovascular diseases (CVD) are widespread in the population and frequently result in fatalities. According to data from a recent poll, use of tobacco, high blood pressure, cholesterol, and obesity contribute to an increasing mortality rate. The need of the hour is to conduct research on the variances of these factors and how they affect CVD. With the use of machine learning and artificial intelligence, this comprehensive study can be completed as their extensive methodologies would help in the prediction or detection of Cardiovascular Disease and discover their patterns in the vast amount of data. The predictive results might help doctors and clinicians in early detection of CVD in patients and might save many lives. After preprocessing the dataset, the classification, machine learning, data mining, hybrid and deep learning models used to predict cardiovascular risk are compared and reported in this paper. This paper compares and summarizes the performance accuracies of several models. The hybrid model with Light Gradient Boosting Machine (LGBM) and Artificial Neural Network (ANN) has an accuracy of 89.46% and is the best model.
Authors - Vaishnavi Vinay, Allen Mathew, Amala Siby Abstract - A decision-making process backed by the integration and evaluation of an organization's data resources is referred to as business intelligence. Since information has been recognised as a business's most valuable asset, it is a crucial resource for growth and plays an increasingly important role in a variety of kinds of organisations. This research article examines the history of business intelligence technologies, their relevance in current times, and the future developments that seem possible. In the 21st century, organizations are transforming toward various approaches based on information and networking in response to a chaotic and ambiguous environment marked by hazy organisational boundaries and rapid change. In such situations, knowledge-based assets emerge as the core of long-term strategic advantage and the cornerstone of success in the twenty-first century. The primary characteristics of business intelligence are determined by data analysis, processing, and visualisation. Business intelligence technologies use relational tables to store and display large amounts of structured and unstructured data, and they utilise specialised tools and mathematics to produce intricate visual reports. This research has been motivated to focus on the upcoming strategic revolution in the market with numerous cutting-edge business intelligence technologies.
Authors - Karan R. Jayan, Sandhya Harikumar Abstract - The Unified Medical Language System (UMLS) is a collection of data and software that brings together different health and biomedical vocabularies and guidelines to facilitate computer system compatibility. The UMLS is a privileged dataset, specifically accessible to those with a special license. It has a complex semantic network covering vast topics and contains details on each of the topics and subtopics. By visualizing this complicated dataset, we can understand each of the hidden relationships separately and the role of each of them in the entire network. This work proposes modelling the UMLS as a Knowledge Graph for efficient storage and retrieval in the graph database Neo4j. Further, clear visualizations of various information obtained from the data are also demonstrated. Centrality and community detection algorithms are used to get insights and better analysis from this complicated dataset. Various queries are formulated to extract relevant information that would help stakeholders better understand the data and diagnose diseases. Queries on almost all the semantic types are processed and information is retrieved.
Authors - Govind Suryawanshi, Suresh N. Mali, Pramod Patil Abstract - Today's world is becoming computer-centric, with thousands of images uploaded to the World Wide Web every second. A survey of various security and privacy threats focused on the use of social networking sites ascertained that multimedia content threats are one of these security issues. Much covert communication is done through data-hiding techniques like steganography, in which secret data is concealed inside carrier images. This can be carried out by different steganographic techniques in the frequency or spatial domain. During the data-embedding process in digital images, the statistical properties of the images get modified, and by tracking those changes an analyzer can reveal the secret communication. In this paper, different steganography techniques are studied along with the artifacts they leave after data hiding, and a multiclass discriminator for blind steganalysis is proposed along with a novel feature-determination approach that discriminates stego and cover images with 98.2% accuracy.
Authors - Ishita Goyal, Y. Madhavi Abstract - Extracting a patient's relevant information from clusters of web documents and traditional hospital systems, and presenting those search results to users in a meaningful form, is a cumbersome task. This is done by carrying out traditional approaches that help in predicting infections and diseases, including heart attacks and many more. However, these traditional methods fail to reveal early signs from the human health system. The solution to this problem lies in employing an agent-based integration approach with the help of ontologies. The paper presents a proposed ontology-based multi-agent data integration system in which external data related to human health (unstructured or structured) is coordinated and semantically integrated by multiple agents. After integration of the data, new knowledge is inferred on the basis of a set of rules, thus making it a multi-agent decision support system. The proposed model is able to predict health status on the basis of the activities performed by humans. The approach presented in the paper is detailed in the context of day-to-day human activities, which helps in enhancing the cardiovascular soundness of humans.
Authors - Hugo Arias-Flores, Marco Solis, Mireya Zapata Abstract - This research aims to find gamified tools that teachers can use in their classes to transform traditional evaluation, so that it is attractive to students and develops their mathematical skills. The objective of the research was to empirically analyze whether the use of gamification has a positive impact on mathematics evaluation, through the application of a quasi-experimental program for learning basic operations in fourth-year students of Basic General Education. We worked with an experimental group and a control group, under the hypothesis that there is a significant difference in the results of the evaluation of basic operations in mathematics when using a traditional instrument versus an online one. The designed instrument consists of 20 multiple-choice questions, extracted from the PISA model evaluations for the fourth year. The experimental group was evaluated with the instrument in Word, and the online instrument was used with the control group. After the intervention, the evaluation results of the experimental group have an average of 16.15, while the control group has an average of 14.20, showing that there is a difference in the evaluations when using a traditional instrument versus an online one. It is concluded that these new ways of evaluating and learning through gamified modalities and resources open new spaces to investigate different options for how to assess the knowledge of future generations, regardless of obstacles of time, space, and resources.
Authors - Geeta Babusingh, R.L. Raibagkar Abstract - Images of outdoor scenes appear hazy due to degradation caused by atmospheric particles (water droplets, dust, etc.) while capturing the image. Dehazing such images is desired in digital photography and computer vision. Haze creates many problems in surveillance, tracking, navigation, and other applications. Thus, to remove it from an image, various defogging methods (single-image and multiple-image) have been proposed in the literature. A single-image dehazing approach using haze veil analysis is proposed. Adaptive contrast stretching and Contrast Limited Adaptive Histogram Equalization (CLAHE) are applied to enhance the visibility of the defogged image, and the results are found to be better for CLAHE. The proposed system gave better results in terms of preserving the finer details and the color quality of the images.
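The contrast-stretching step mentioned above can be sketched as follows (an illustrative global percentile-based stretch, not the paper's adaptive method; the synthetic "hazy" image is a stand-in, and CLAHE itself is usually applied via a library routine such as OpenCV's `createCLAHE`): haze compresses intensities into a narrow band, and stretching maps that band back onto the full range.

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Percentile-based contrast stretching: map the [low, high]
    intensity band of a low-contrast image onto the full 0-255 range,
    clipping the extreme tails."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

# a synthetic "hazy" image: intensities squeezed into a narrow band
rng = np.random.default_rng(0)
hazy = rng.uniform(120, 160, size=(64, 64))
clear = contrast_stretch(hazy)
```

Unlike this global stretch, CLAHE applies histogram equalization per tile with a clip limit, which is why it preserves local detail better, as the abstract reports.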
Authors - Jahanvi Ojha, Shikha, Anupama Nayak, Varsha Goyal, Kavita Sharma, S.R.N. Reddy Abstract - In this technologically advanced world, technologies like the Internet of Things (IoT), along with modern wireless systems, have led to the development of smart and intelligent tools to minimize the dependency of different categories of people, such as older people, physically disabled people, and young children. Cooking is a significant challenge for many people, i.e., working people, students, elders, and disabled people. Thus, to enhance the lives of the target people, various smart cooking products have been designed and developed in the literature. The smart automated cooker described in the literature is designed and developed in a simulated environment and requires a mobile application for remote-control operations. Hence, this article aims to study various smart kitchen products and propose a mobile application for the smart automated cooker to remotely control the associated functions using wireless, cooking, and assistive technologies.
Authors - Rani Nandkishor Aher, Nandkishor Daulat Aher Abstract - Data compression is a common feature of current information processing and has a broad range of uses. Audio compression is critical in database applications such as storage and transmission. The difficulty of lowering the number of bits required to represent digital audio is addressed via audio compression, which reduces redundancy by removing irrelevant, redundant data. Recent developments in deep learning have motivated researchers to investigate challenges involving highly structured data utilizing unified deep network models. Due to the necessity for discrete representations that are difficult to train, the construction and design of such models for compressing audio signals have proven difficult. The purpose of this study is to concentrate on quantum neural networks for compressing audio files. A quantum-based autoencoder capable of compressing audio data into a low-dimensional space is therefore critical for achieving automatic audio compression, while the decoder decompresses the quantum audio signals using deconvolution layers. Finally, the original signals are retrieved from the quantum representation of the audio signals.
Authors - Riyazahemed A Jamadar, Anoop Sharma, Kishor Wagh Abstract - Recent developments in deep learning have provided a broad spectrum of algorithms and tools for precision agriculture. It is evident from the research work hitherto carried out in this domain that deep learning based systems outperform when trained with large datasets. As the public datasets available for this domain are relatively small, deep learning models cannot be leveraged to their full capacity. To overcome this issue, we propose a data augmentation technique that combines Generative Adversarial Networks (GANs) with Transfer Learning to substantially increase the size of datasets. The proposed technique uses a GAN to generate synthetic images of pomegranate leaves, whereas through Transfer Learning fine-tuning and reduced convergence time of the model are achieved. The dataset used for the GAN consists of real-time images of pomegranate plant leaves, and DenseNet is trained with a combination of real pomegranate images and GAN-generated synthetic images for Transfer Learning. The experimental results show that the efficacy of the proposed work is higher when compared with standalone approaches.
Authors - Chandrani Singh, Sunil Khilari, Anchal Koshta Abstract - Sustainable agriculture aims to incorporate conservation of the environment and the growth of the economy. Many problems in the agricultural sector are currently associated with environmental inequalities and need to be addressed in a rigorous manner. Experiments need to be conducted and prototypes built, with the aid of sensors, Internet-connected components, and Unmanned Aerial Vehicles (UAVs), to help agronomists, agricultural engineers, and farmers streamline operations and, using robust data analytics, gain effective insights into crop fertility for sustainable agriculture. The use cases presented in this study are a comparative analysis of monitoring a wide area of agricultural land using conventional ground-based vehicles versus Unmanned Aerial Vehicles (UAVs). This paper presents case scenarios and an innovative solution applying drone technology to make the agriculture ecosystem more sustainable. Further, the researchers present an experimental setup of sensory Internet of Things (IoT) devices to showcase that the problems of sustainability can be addressed by using new and innovative devices.
Authors - Kartik Kishore, S.N. Misra, Abhipsa Ray Abstract - Inflation targeting has been RBI's prime concern since 2014. During 2006-2013, India witnessed average inflation of 9%, affecting the bottom half of the population significantly. The Urjit Patel committee in 2014 suggested that RBI's repo rate should be decided by an MPC (Monetary Policy Committee) with economists and finance-sector experts from outside, instead of the RBI governor; that CPI (Consumer Price Index) be considered the nominal anchor for inflation targeting rather than WPI (Wholesale Price Index); and that inflation be contained within a glide path of 4-6%. These recommendations were put into effect in 2015. The paper brings out the relationship between repo rate and inflation from 2008 to 2013 and contrasts it with the rates decided from 2014 to 2016. The changed arrangement shows there is a healthy correlation between repo rate, GDP growth, and CPI. The paper also explores how RBI addressed economic challenges from 2017-2019, the pandemic period of 2019 to 2021, and the economic revival thereafter. It brings out how inflation forecasts by the MPC have often not kept pace with actual inflation and laments its failure to note the dystopic effect of demonetization. It also notes the fine harmony between monetary and fiscal policy during the Covid period, when low repo rates walked hand in hand with fiscal stimulus measures by the finance minister. As India recovers from the exogenous shock of Covid-19, the paper very strongly recommends that best global practices, such as keeping the Taylor rule as a template for repo rate determination, be kept in mind by RBI.
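For context, the Taylor rule mentioned above, in its standard textbook form (a general reference formula with Taylor's original 0.5 coefficients, not one taken from the paper), sets the policy rate from the inflation and output gaps:

```latex
i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - \bar{y}_t)
```

Here i_t is the nominal policy (repo) rate, r* the equilibrium real interest rate, π_t actual inflation, π* the inflation target (4% at the centre of India's glide path), and y_t − ȳ_t the output gap.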
Authors - Adlene Ebenezer P, Manohar S, Sahaya Sakila V Abstract - Understanding how Land Use and Land Cover (LU/LC) change affects the environment is now essential globally, and change-detection techniques for LU/LC have been widely adopted. This study examines the changes in LU/LC in the Javadi Hills region of the Indian state of Tamil Nadu. The USGS provides Landsat 8 images, which collect data from two science instruments, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS); the collected images have 11 bands, which are merged into colour images providing various details regarding the circumstances of the land cover. This study compares approaches for classifying land use and land cover, including the Random Forest Classifier, Support Vector Machine, and Maximum Likelihood Classifier, and evaluates the impact of particular classification methods on analyses of land-use structure in the study area. Land-resource managers and policy makers will be able to understand how to take action to protect the environment by observing and analyzing the changes that have taken place in the study region. The Support Vector Machine produces the best results, with an overall accuracy of 90.23% and a kappa value of 0.84, and change analysis over a span of four years is also carried out.
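The kappa value reported above measures classification agreement corrected for chance. As an illustrative sketch (not the authors' code; the toy reference and predicted land-cover labels below are hypothetical), Cohen's kappa can be computed from a pair of labelings as follows:

```python
from collections import Counter

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement between two labelings, corrected for
    the agreement expected by chance from the two marginal label distributions."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    expected = sum(true_counts[c] * pred_counts[c] for c in true_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical reference vs. classified land-cover labels for six pixels.
ref = ["forest", "water", "urban", "forest", "water", "forest"]
pred = ["forest", "water", "urban", "forest", "urban", "forest"]
print(round(cohen_kappa(ref, pred), 3))  # → 0.739
```

A kappa of 0.84, as reported for the SVM, indicates strong agreement beyond chance.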
Authors - Sumit Pandey, Srishti Sharma Abstract - For each new domain, researchers and technologists developing sentiment classification algorithms must gather and curate data. In particular, for domains where the features change over time, the work required to annotate these corpora is prohibitive. This study explores the idea of training classifiers on a limited set of domains before using them on further domains. One issue is that classifiers frequently perform poorly when the distributions of the training and testing data differ. Another is that it is unclear how to choose domains that are suitable for training and testing other domains based on similarities. To train classifiers that generalise and still perform well on test sets drawn from various distributions, the authors approach this challenge by making use of numerous, smaller training datasets. The authors suggest using recent developments in meta-learning to learn stronger cross-domain embeddings that can generalise in the few-shot learning context. This work investigates sentiment classification using few-shot learning. There are a variety of existing approaches to this problem, but recent advances in meta-learning have proven quite effective in the few-shot context. For few-shot learning, the authors suggest using MAML (Model-Agnostic Meta-Learning) to quickly train better cross-domain embeddings and networks. The final model achieves state-of-the-art performance.
Authors - Arsh Goyal, Sujith K, Mamatha HR Abstract - Natural Language Generation has gained immense popularity in the field of Natural Language Processing. The use of deep learning algorithms for text generation has various applications across domains. One such application is the ability to synthesize poetry from a collection of poems written by a particular poet. In this paper, a stacked character level LSTM model has been proposed to generate Shakespeare-style sonnets. The model is trained on the complete set of sonnets written by Shakespeare and is able to generate a new sonnet. To judge the novelty of the generated sonnet, the cosine similarity between a generated sonnet and each sonnet in the corpus has been computed. The qualitative validation of the sonnet has been done by manual methods. The sonnets are judged based on fluency, coherency, meaning, and poeticness.
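The novelty check described above compares the generated sonnet against each corpus sonnet by cosine similarity. A minimal bag-of-words sketch of that computation (illustrative only; the paper's actual text vectorization is not specified, and the example sonnets below are hypothetical fragments):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

generated = "shall i compare thee to a summer day"
corpus = ["shall i compare thee to a summers day",
          "my mistress eyes are nothing like the sun"]
# A low maximum similarity to every corpus sonnet suggests the output is novel
# rather than memorised.
scores = [cosine_similarity(generated, s) for s in corpus]
print(max(scores))
```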
Authors - Prasanna H Sulibhavi, Suvarna G Kanakaraddi, Shantala Giraddi Abstract - Kidney stone disease has become a health risk for humans all over the world, and most people at the initial stage do not notice stone formation in the kidney as a disease, while it slowly damages the organ. An estimated 30 million people currently suffer from this disease. At present, detection of kidney stones is carried out manually by doctors on medical scanned images, but this procedure is cumbersome and subjective as it relies on the physician. The domain of Artificial Intelligence (AI) has experienced rapid development within the last decade and has attained the power to simulate human-like thinking in various situations. When Deep Neural Networks (DNNs) are trained with huge datasets and high computational resources, they can produce great outcomes. The imaging techniques available for detecting kidney diseases are CT scanning, ultrasound imaging, and X-rays; the most preferred method is computed tomography (CT) imaging because of its better spatial resolution and high-level contrast. Image segmentation is a crucial step in digital image processing; its main objective is to modify the image representation into a form that is more meaningful and simpler to analyze. To give a detailed overview of existing segmentation techniques, we have carried out a survey of kidney segmentation and classification techniques, along with their benefits, drawbacks, and challenges.
Authors - Bhavneet Singh, Ashpreet Kaur, Basanti Pal Nandi, Amita Jain, Devendra Kumar Tayal Abstract - This research on textual document categorization experiments with a novel method using the Fast Fourier Transform. Topic modelling is a much-explored area in automatic text categorization, as a huge volume of text is generated in the current era. The state-of-the-art review shows that machine learning and deep learning algorithms are extensively used in this field, and even evolutionary algorithms have found application here, though in smaller numbers. Method: In this research, a new concept based on the Fast Fourier Transform is used for text classification. The power spectrum of the Fast Fourier Transform has been used for the first time in this application. The Fast Fourier Transform and its power spectrum calculation take very little time to compute. Result: It has been observed that this method is suitable for any type of text, especially for large text documents, where the accuracies of deep learning methods fall below those of the proposed approach. Moreover, the time complexity of the algorithm is very low, making it applicable to any large text in this age of big data. Conclusion: The research achieves 8% higher accuracy on the 20 Newsgroups data than any other method present in the literature. It is also suitable for short, single-sentence text, which has been tested on the BBC news data.
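The power-spectrum idea above can be sketched with a naive discrete Fourier transform over a numeric representation of a document. This is an illustrative stdlib-only version, not the authors' method: the term-frequency sequence below is hypothetical, and the paper would use a fast O(n log n) FFT rather than this O(n²) DFT.

```python
import cmath

def power_spectrum(signal):
    """Naive DFT power spectrum |X_k|^2 of a numeric signal
    (e.g. a term-frequency sequence derived from a document)."""
    n = len(signal)
    spectrum = []
    for k in range(n):
        x_k = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        spectrum.append(abs(x_k) ** 2)
    return spectrum

# Hypothetical term-frequency sequence for one document.
tf = [3, 1, 0, 2, 1, 0, 4, 1]
ps = power_spectrum(tf)
print(round(ps[0]))  # DC component = (sum of the frequencies)^2 → 144
```

The resulting spectrum is a fixed-length numeric feature vector, which is what makes it cheap to compare documents with.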
Authors - Kranthi Malla, Kalpana Petluri, Bhanu Prasad Mekala, Y. Sandeep Abstract - It is crucial for clients to have a clear understanding of the interior design of their home, and the construction company must confirm whether the suggested design meets their expectations. But throughout the actual building process, there is no ideal way to display the house's ultimate appearance. This solution creates a sense of really entering the house, which makes it easier for clients, constructors, and interior designers to visualize it. Customers may also suggest real-time adjustments.
Authors - Aditi Sharma, D. Franklin Vinod Abstract - The skin serves as the human body's strongest barrier, preventing injury to the critical internal organs. Yet this vital part of the body can experience severe diseases brought on by fungi, viruses, or even dust. Numerous skin conditions affect millions of individuals worldwide; from eczema to acne, people endure a lot of suffering. A minor skin ailment, such as a boil, can occasionally worsen or become infected, leading to serious health problems. Some skin conditions are so contagious that they can spread by a handshake or the shared use of a handkerchief. A thorough diagnosis can lead to a proper course of treatment, which can lessen the suffering of those who are ill and raise awareness. Pneumonia, a bacterial, viral, or fungal infection, targets the respiratory system; the illness causes the air sacs of the lungs to become irritated and inflamed, and the lungs to fill with fluid and mucus. In this study, convolutional neural networks are used to detect skin diseases using image classification and segmentation. Two different models, VGG16 and VGG19, are used for this research, and a single-layer CNN model is used for comparison with the existing models. The testbed made use of openly available data from Kaggle's Dermnet images. The dataset was built using 23 categories of diseases and 19,500 images covering classes such as acne, melanoma, eczema, seborrheic keratoses, and tinea. Further, the dataset is segregated for acne and eczema, categorized as bacterial and fungal infections respectively. The performance measures for the VGG16 results are accuracy (95.2%), specificity (95.8%), and sensitivity (93.7%). The VGG19 result has a 97.2% sensitivity, a 95.4% specificity, and a 94.6% accuracy. Sensitivity, specificity, and accuracy for the single-layer CNN are 96.3%, 97.6%, and 96.5% respectively.
Authors - Nivedhitaa Ranganathan, Riya Puri, Stuti Patel, Sudhanva M., Sandesh B J Abstract - Crime has grown to become a preeminent issue in today's society, creating a problem for the common man and governments alike. There are instances where the felon cannot be convicted due to insufficient evidence, and sometimes the incident is not even reported. The current law enforcement agencies lack the manpower and technology to combat this. Here lies the incentive to create a system which, once automated, can survey a crime-ridden area to capture footage and, on the occurrence of a crime, notify the concerned authorities. The model proposed in this paper contains a drone, or UAV, which is programmed to monitor a locality or town. The camera on the drone captures the frames in which a crime has occurred and sends selected frames to the server (backend) hosting the CNN. After training and testing the CNN model, its core competency is to detect whether a crime has occurred in the frames or images received in near real time. The images from the UAV are also processed within the drone via a facial recognition module using OpenCV. This framework integrates a DJI Tello drone, controlled through TelloPy, with a facial recognition module and a DenseNet-121 convolutional neural network.
Authors - Hardikkumar S. Jayswal, Jitendra P. Chaudhari Abstract - The agriculture industry has a momentous role in the development of any country. India faces heavy losses in crop production because of fungal, bacterial, and viral plant diseases. Indian farmers still use traditional techniques for farming, especially for the detection of diseases in a plant. They rely on naked-eye observation to check the health of the plant, and the resulting wrong detection and classification of diseases sometimes cause heavy losses in crop production. This may be prevented using plant disease detection and classification techniques. In this paper, we present a survey and implementation of different plant disease detection and classification approaches using machine learning and spectroscopy.
Authors - Sanju Kaladharan, Dhanya M, Rejikumar G, Janeesha Puthanpurayil Abstract - COVID-19 has revolutionized healthcare systems globally with the accelerated implementation of digital health systems. Digital innovation in patient care through approaches like eHealth and mHealth has gained popularity and is considered effective for encountering challenges concerning healthcare accessibility. Along with the ability of digital health systems to adapt and evolve, a congruence of social, economic, and environmental sustainability is essential for any program to be "sustainable." In this paper, the factors associated with each form of sustainability of digital health systems are identified, considering the case of the Indian digital health system, and a sustainability triangle framework for digital health systems is proposed. A blended financing model and a frugal innovation strategy are proposed to enhance the economic sustainability of digital health systems. Fostering digital and health literacy and providing human assistance (e.g., through community health workers/Accredited Social Health Activists) for the vulnerable or aged population is recommended for social sustainability. Cleaner production of the ICT infrastructure, its scientific disposal, reducing the consumption of non-renewable energy, and fostering sustainable consumption to avoid rebound effects are identified as feasible strategies for environmental sustainability.
Authors - S Abhishek, Mahima Chowdary Mannava, AJ Ananthapadmanabhan, Anjali T Abstract - Using a stethoscope for auscultation has proven crucial for diagnosing respiratory illnesses in patients. Although auscultation is quick, easy, and affordable, it has inherent limitations: a traditional stethoscope cannot record respiratory sounds, making it challenging to communicate them. The recordable stethoscope, in particular, has made it possible to analyse respiratory sounds using artificial intelligence. With the help of the proposed Android application, categorizing auscultation sounds becomes easier and more precise.
Authors - Nayar Cuitlahuac Gutierrez Astudillo, Dinesh Bhagwan Hanchate, Arvind M. Jagtap Abstract - A Firefly Algorithm (FA) solves a truss structure optimization conceptual design, and its performance is compared against the Big-Bang Big-Crunch algorithm, Genetic Programming, and a Natural Crossover Genetic Algorithm. Truss structural optimization is a hard problem; in a professional context, it is done by optimizing the cross sections of the elements along with the geometry and topology of the truss. Most truss optimization methodologies focus on the optimization of geometry and cross sections, leaving a gap between a partially optimized and a fully optimized truss layout. This gap is covered here with an FA that considers discrete and continuous parameters to optimize in parallel the geometry, cross sections, and topology of a bridge truss structure. The strategy therefore focuses on finding a representation that can handle variable numbers of elements and is readable in all optimization dimensions, which relates to changing the quantity of nodes and bars in the truss structure. Another part of the methodology was to propose a constraint-handling function in which the general solution is related to the performance of the members in the structure. To demonstrate the efficiency of the algorithm, two comparison problems were solved, one in size optimization and the other in layout optimization. The results show that the FA is fast and effective in finding optimal topologies and geometries in cases such as the ten-bar truss and a 70 m span bridge truss structure. In the optimization process, the FA proved to be effective in a complex variant of the bridge structure. The contributions are to establish the initial boundaries, parameters, and special operations that link speed of convergence and quality of the solutions in the run.
Authors - Yogesh N. Patil, Sanil S. Gandhi, Arvind W. Kiwelekar, Laxman D. Netak Abstract - Generation of quizzes or examination papers by using a question bank is an important activity supported by many E-learning platforms. This chapter presents an ontology to represent the knowledge associated with question items. The ontology is described through UML notations. The ontology presented in this chapter is practical and sufficient in expressing various properties related to question items. These properties capture the meta-data necessary for effectively describing question items. The ontology’s primary applications include designing question banks and generating question papers based on constraints such as graded topic coverage, broad question types, multiple difficulty levels, and rules.
Authors - Rajesh Kannan Megalingam, Aravind Prakash Senthil, Bharath Sasikumar, Sreekanth M Mohandas, Vijay Egumadiri Abstract - Many supermarkets have narrow spacing between shelves, which makes it difficult for customers to navigate between them. Also, during rush hours most of the billing counters are fully occupied, which discourages people. The Covid pandemic has also made people avoid contact with others, leading them to prefer online shopping. Unlike online shopping, offline shopping gives people the advantage of physically interacting with the shopping items. This can be achieved with a smart trolley that is compact, has a high payload-to-weight ratio, and has a billing machine integrated into it. This paper discusses the design and analysis of an electric-motor-powered compact trolley chassis. A space-frame structure is used for the trolley design, as it has a high strength-to-weight ratio. Various tube cross sections are compared to find which one has the stiffness to carry the load. Static structural analysis is carried out for different materials, which are chosen based on Ashby charts; a parameter called the material index, or performance index, is used to find the best material for the application. Different cross sections of the tubes are also analyzed, and modal analysis is carried out to avoid resonance. Based on the results, carbon fibre is found to be better suited for the application, but when cost and manufacturing constraints are considered, aluminium profile tubes are the best option.
Authors - Nitin B Ghatage, Pramod D Patil Abstract - In most real-world time series applications, offline modelling and forecasting methods may become ineffective and non-optimal, as predictive algorithms trained on past data get outdated in forecasting future behaviour: the data may undergo concept drift and evolve at a rapid rate. To be adaptive, the model must detect changes in the data stream and be fast enough to incorporate the required changes in itself before its performance degrades drastically. A lightweight adaptive model is presented in this paper that can achieve better results and handle concept drift seamlessly.
Authors - Ria Tyagi, Vishad Mehta, Suresh Sankaranarayanan Abstract - The conventional way of identifying structural defects and cracks in high-rise buildings is very labor-intensive and time-consuming, and requires a lot of equipment. Past work has used machine learning and deep learning models to identify cracks through image classification. This paper proposes a better way of identifying these cracks using drones and deep learning models. A custom-made shallow convolutional neural network was implemented and compared against pretrained models such as VGG16, ResNet50, Xception, Inception, and DenseNet on three different datasets based on several performance metrics. After analyzing which model performs best in terms of accuracy, the CNN and VGG16 models were integrated with a drone to identify cracks. This would be beneficial in informing the inspection team about the development of cracks in building walls so that the necessary actions can be taken.
Authors - Lubna Ansari Abstract - Nowadays, cloud computing is one of the prominent tools for offering on-demand services via the Internet. Cloud datacentres may not always be able to provide the services needed during peak usage times for the diverse applications in e-governance, so planning tasks across various clouds in a decentralized way for e-governance has drawn a lot of interest in recent years. This study proposes ZDSCN task scheduling methods to enhance performance, using conventional normalization methods employed in data mining, namely decimal scaling and the z-score. The best VMs are then allocated based on similarity while taking cost into account.
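The two normalization methods named above are standard data-mining transforms. A minimal sketch of both, for illustration only (the task lengths below are hypothetical values, not from the paper's experiments):

```python
import math

def z_score(values):
    """Z-score normalization: (x - mean) / std, giving zero mean and unit variance."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

def decimal_scaling(values):
    """Decimal scaling: divide every value by 10^j, where j is the smallest
    integer such that max(|x'|) < 1."""
    j = len(str(int(max(abs(v) for v in values))))
    return [v / (10 ** j) for v in values]

# Hypothetical task lengths to be normalized before matching against VMs.
tasks = [200.0, 450.0, 900.0, 150.0]
print(decimal_scaling(tasks))  # → [0.2, 0.45, 0.9, 0.15]
```

Normalizing task and VM attributes onto a common scale is what makes a similarity-based matching of tasks to VMs meaningful.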
Authors - Rajesh Kannan Megalingam, Ragavendra B Maruthababu, Sreekanth M Mohandas, Sakthiprasad Kuttankulangara Manoharan, Anju Latha Nair S, Manaswini Motheram Abstract - This work proposes an uncomplicated method of controlling a hyper-redundant robot. Here, the robot is a linkage of several Dynamixel servo actuators, which control the motion of each joint in the robot body. The higher the degrees of freedom, the higher the complexity of controlling the robot, and mathematical computation has a major impact on control, especially for a robot with many degrees of freedom. IoT plays a major role in controlling and manipulating objects in the physical world, and its rapid growth brings better solutions for controlling robots remotely. As the control relies on cloud computing, it has the advantages of rapid deployment and automatic software integration with robots, and much open-source firmware is available. In this paper, a NodeMCU is used as the communication and controlling component for the robot to show the effectiveness of implementing IoT in robot control. The firmware used is Wi-Fi enabled and communicates through a mobile app. Different snake-like movements are implemented to observe the robot's capability. As a result, the proposed work concludes with an efficient way to control a complex robot in a demanding situation.
Authors - Sujatha.K, NPG. Bhavani, Prameeladevi Chillakuru, CH. Sarada Devi, N. Janaki, J Femila Roseline, D Ezhilarasan Abstract - A smart wearable is a fully integrated and networked system. The term "wearable" implies that the supporting environment is either a person or an article of clothing, where one or more sensor and actuator nodes are located on the end user's side and may even be integrated into the clothing. The nodes can access a local server or the cloud. They are equipped with motion sensors that keep an eye on daily activities and sync them with computers and mobile devices, including health- and fitness-related tracking data. The employers pay their employees according to attendance and set up a medical check-up facility for them based on the report produced by the wearable device. Using this smart wearable device, employers can follow their employees' whereabouts, whether they are present in the office or outside, without the employee's consent.
Authors - Abhipsa Ray, S. N. Misra Abstract - The Self Help Groups (SHGs) operating across Odisha under the flagship of Mission Shakti, started in 2002, have devised an important mechanism for empowering women through microenterprise programs. Women in rural Odisha are constrained by several social obligations and taboos; the patriarchal family and social setup often deters these rural women from participating in microenterprise programs and other women-empowerment initiatives of the SHGs. Apart from in-house restrictions, the SHGs' organizational structure, along with bureaucratic interventions, aggravates the problems for those who are within the group. The problems of marketing their finished products and availing themselves of microfinance benefits further discourage women to some extent. The current study identifies some of the challenges these women face and offers solutions by considering the following covariates: expanding educational opportunities for women, providing financial aid, providing market facilities for their finished products, creating self-employment programs, providing training through microenterprise programs, subsidies, and new schemes, policies, and missions, organizing workshops, and conducting research. The positive indicators for member women of the groups have demonstrated tremendous changes in their economic independence and contribution to the family, but there is still a long way to go to mitigate the gender disparity faced by these rural women. The study aims to analyse the role of SHGs in the social and economic upliftment of women and to suggest the way forward for "Mission Shakti ++" based on scientific analysis of the collected data using PSM scores.
Authors - Afrah Fathima S, Muqaddam Aaqil Sheriff, Jananee V Abstract - "There are wounds that never show on the body that are deeper and more hurtful than anything that bleeds." Trauma is defined as "an emotional response to a terrible event like an accident, rape, or natural disaster" by the American Psychological Association (APA), whose goal is to expand psychological science and knowledge, use psychology to influence important societal issues for the better, and prepare psychology as a discipline and a career for the future. The world is going through a phase that is not understood by everyone, and there may be various reasons for it. Trauma, stress, and mental health are all interrelated, and how one leads to another is a subject of great study. People might not understand how one leads to another, as they might not have absolute proof for it, and this is where data analysis comes into play. In this paper, three datasets have been used and analyzed to show that crimes such as rape lead to suicidal thoughts, and that such thoughts become the inception of mental trauma. The three datasets have been analyzed by cleansing the data, then analyzing and visualizing it using the Python libraries pandas, seaborn, NumPy, and matplotlib.
Authors - Priya Radhika Vudatha, Keerthana Chiravuri, Yaswanth Sai Chikkula, Y. Sandeep Abstract - Brain tumors are an uncontrolled expansion of brain cells and are among the most common cancer killers for both children and adults under the age of 40. Every year, over 12,000 people are diagnosed with brain tumors, of whom 500 are children and young people. The functionality of the brain is affected by the abnormal growth of cells within it. Early-stage detection of a brain tumor helps in providing better medication and can extend the patient's life span. A computer system can help radiologists locate and classify tumors. MRI scanning is the most useful modality in medical imaging diagnosis, and in recent years MRI has emerged as a critical tool in medical imaging, with many advanced techniques and strategies being developed by researchers. The proposed study uses a deep learning architecture, the EfficientNet-B0 model, that can automatically classify brain tumors in MRI images.
Authors - Niharika Terli, Pavan Chintakayala, Venu Madhavi Angaluri, Suhasini Sodagudi Abstract - The use of electronic gadgets has been increasing rapidly day by day. Around 80% of people depend on mobile devices for accessing, storing, and sharing information, which requires security that is critical to manage. In this regard, security is undermined by the presence of spam messages on mobile devices. It is important to detect spam messages because they contain unnecessary information and could lead to data leakage. Two different solutions to this problem are developed using machine learning algorithms: for finding the root word we use the Snowball Stemmer and Porter Stemmer algorithms, and techniques such as the TF-IDF algorithm are used to increase accuracy. To combine these, we use a multinomial Naive Bayes algorithm and a Support Vector Machine. Once a vectorized model is obtained, the messages can be classified. Upon completion of the project, society can be secured from hackers, devices can be protected against viruses, and the unwanted malicious behavior affecting social network users can be identified.
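The multinomial Naive Bayes classifier mentioned above can be sketched in a few lines. This is an illustrative stdlib-only version over raw word counts with Laplace smoothing, not the authors' pipeline (which also applies stemming and TF-IDF weighting); the training messages below are made up:

```python
import math
from collections import Counter, defaultdict

def train_mnb(docs, labels):
    """Fit per-class word counts and class priors for multinomial Naive Bayes."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    vocab = set()
    for doc, label in zip(docs, labels):
        words = doc.lower().split()
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict_mnb(model, doc):
    """Pick the class maximizing log prior + sum of smoothed log likelihoods."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in doc.lower().split():
            # Laplace smoothing avoids zero probabilities for unseen words.
            lp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = ["win a free prize now", "free cash offer win",
        "meeting at noon tomorrow", "see you at the office"]
labels = ["spam", "spam", "ham", "ham"]
model = train_mnb(docs, labels)
print(predict_mnb(model, "free prize offer"))  # → spam
```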
Authors - Sudhir B. Desai, Vaishali P. Bhosale, Ajit B. Kolekar Abstract - Augmented Reality (AR) is an effective immersive technology. We have applied it in preschool education to make learning more engaging and interesting, developing a demo AR application for the English alphabet to enhance traditional learning. The application makes use of smartphones: when the user holds the smartphone over a letter in the book, a 3D model relevant to it pops up, bringing a virtual 3D model into the real environment. Scripts are added to the models to allow interaction with them. We have used the Unity 3D game engine, Vuforia, and the Unity Asset Store for the development of this AR book, with C# as the scripting language. We have implemented AR as a marker-based application, which can be further developed as marker-less AR.
Authors - Remya Kommadath, Makkitaya Swarna Nagraj, Debasis Maharana, Prakash Kotecha Abstract - The performance of recently proposed metaheuristic techniques is mainly analyzed by solving the bound-constrained real-parameter benchmark functions of the competitions conducted as part of the prestigious Congress on Evolutionary Computation (CEC). Bounding strategies are an integral part of these algorithms, as they aid in determining the optimal solution within a feasible search space; however, they are rarely discussed as part of the algorithmic descriptions. The bounding strategy utilized for solving the benchmark functions is not necessarily beneficial in solving real-life optimization problems. This work studies the impact of switching from the innate bounding strategy to the corner bounding strategy on the results obtained from two algorithms, the Improved Multi-operator Differential Evolution (IMODE) algorithm and the Adaptive Gaining Sharing Knowledge (AGSK) algorithm, the winners of the CEC 2020 competition on solving bound-constrained real-parameter benchmark functions. The performance of these two techniques in solving a real-life production planning problem with their innate bounding strategies is analyzed, and the obtained results are not satisfactory. The default bounding strategies of the two algorithms are then replaced with the corner bounding strategy, and the performance of the resulting modified IMODE (M-IMODE) and modified AGSK (M-AGSK) on the production planning problem improves significantly. The best fitness values obtained by M-IMODE are enhanced in the range of 77.84%-99.09% for Cases 1-4 and 19.99%-50.96% for Cases 5-8. AGSK is unable to generate feasible solutions, while the values obtained by M-AGSK are improved in the range of 36.77%-67.5% over M-IMODE and 0.03%-15.29% over sTLBO.
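Corner bounding is commonly understood as resetting any out-of-bounds decision variable to the bound it violated. A minimal sketch of that repair step, under this interpretation (the candidate solution and bounds below are hypothetical, not from the paper's production planning cases):

```python
def corner_bound(solution, lower, upper):
    """Corner bounding: clip each out-of-bounds variable to the violated bound,
    leaving in-bounds variables unchanged."""
    return [min(max(x, lo), hi) for x, lo, hi in zip(solution, lower, upper)]

# Hypothetical candidate produced by a mutation/crossover step.
candidate = [-0.5, 3.2, 7.9]
lower = [0.0, 0.0, 0.0]
upper = [5.0, 5.0, 5.0]
print(corner_bound(candidate, lower, upper))  # → [0.0, 3.2, 5.0]
```

Other innate strategies (e.g. random reinitialization or reflection inside the bounds) repair violations differently, which is precisely the design choice whose impact the paper studies.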
Authors - Sri Krishna U, Keerthivasan S, Kilari Prudhvi, Radha Senthilkumar, Vijayalakshmi U Abstract - The probability of a pedestrian being hit by a self-driving car is very high because of the varying attributes of pedestrians on the road. In the proposed work, an end-to-end framework has been developed that identifies five different parameters essential in situations when a pedestrian comes in front of the vehicle. The end-to-end framework has been created using multitask learning and common datasets to obtain a single network that provides faster and more accurate results, helping to make decisions in real time. The time taken to obtain the result is much lower than with the existing system, and the results obtained are much better. All the modules were analyzed, and their advantages, limitations, and use cases are explained in the proposed work.
Authors - Rashmi Dixit, D.P. Gandhmal Abstract - RealDApp is made to give the government an effective means of managing and storing land assets. It enables the government to move all the data from the centralized registry and keep the land assets on a blockchain network, where they can be managed appropriately. Buying/selling protocols give the user the ability to buy or sell land assets via a web platform. It is essential for the government, which is in charge of keeping the land registry, to keep all records safe and shield them from cyberattacks and fraud, and blockchain technology can be used to do this. After registering and confirming his identity, the user places all of his relevant land assets on the blockchain network, protecting them by submitting them to a decentralized registry. The system now includes buying/selling protocols in addition to being a decentralized database: users can buy new land or sell their existing land on the blockchain network without the use of a third-party system. Fraud is prevented because all network transactions are permanently stored on the chain and cannot be changed. This entire process helps the government manage the land registry in a secure manner while enabling users to acquire or sell land assets in a transparent and risk-free manner.
Authors - Priyanka Mishra, Ganesan R Abstract - Blockchain was introduced as the fundamental technology enabling cryptocurrency exchanges among untrusted parties. Today, its transformative potential is compared to that of the World Wide Web. With distributed ledger technology, blockchain provides a distributed environment with transparency, integrity, and data security as its highlight features. A significant number of research and case-study findings show that many industries are already investigating the various advantages of blockchain technology. Analyzing how different businesses adopt blockchain can show how to solve similar kinds of problems in a developing industry. Most current research on blockchain technology concentrates on its application to cryptocurrencies such as Bitcoin, and only a limited number of studies focus on exploring the use of blockchain technology in other settings or sectors. Blockchain technology is more than just digital money; it can have several applications in government, the finance and banking industry, accounting, and business process management. Accordingly, this paper endeavors to investigate the opportunities and challenges in the use of blockchain technology.
Authors - Vittal Badami, Suvarna Kanakaraddi, Priyadarshini Kalwad Abstract - This research investigates approaches for identifying and classifying plant leaf diseases from digital images using deep neural networks. While diseases can affect any part of a plant and occasionally go undetected, there are some physical characteristics or symptoms that plants exhibit that can be quickly detected by vision. These diseases can be identified and classified by machine learning algorithms using a wide range of techniques, such as deep neural networks, regression analysis, and colour analysis, but not all techniques are appropriate for identifying all types of plant leaf diseases, and it can be difficult to determine which approach is best for a particular type of disease detection. This paper mainly focuses on image classification based on deep neural networks; object detection based on thresholding and image transformation; and severity quantification with different methods such as colour analysis, including Histogram of Oriented Gradients (HOG), spot analysis, leaf-area measurement using an open-source tool, and segmentation-based calculation of the affected area. This detailed analysis will be helpful to researchers working on algorithm optimization for image classification, object detection, and severity quantification in identifying plant leaf diseases.
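The segmentation-based severity quantification mentioned above can be sketched in a few lines. This is an illustrative assumption, not the authors' exact pipeline: pixels inside the leaf region whose greenness falls below a threshold are treated as diseased, and severity is the diseased fraction of the leaf area.

```python
# Toy sketch of segmentation-based severity quantification (assumed
# approach): severity = diseased pixels / total leaf pixels.

def severity_ratio(leaf_pixels, diseased_threshold=100):
    """leaf_pixels: greenness values (0-255) of pixels inside the leaf mask.
    Returns the fraction of leaf pixels classified as diseased."""
    if not leaf_pixels:
        return 0.0
    diseased = sum(1 for g in leaf_pixels if g < diseased_threshold)
    return diseased / len(leaf_pixels)

if __name__ == "__main__":
    # 6 of 10 leaf pixels fall below the threshold -> severity 0.6
    pixels = [30, 50, 90, 95, 99, 80, 150, 200, 180, 160]
    print(severity_ratio(pixels))  # 0.6
```

In a real system the leaf mask and greenness channel would come from the thresholding and colour-analysis steps the abstract describes.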
Authors - Rama Krishna Peddarapu, Akula Nihal Reddy, Sri Soumya Pappula, Pemetcha Bhargava Datta Varma, Sara Likhith Kumar, Sonika Goud Yeada Abstract - Posting photographs online has become an essential aspect of consumers' lives in recent years due to the rapid development of social networking sites. Searching for the right caption for a picture can be troublesome, and some users spend hours in search of a good one. Thus, image captioning has gradually attracted the attention of many people and become an interesting and arduous task. Image captioning automatically generates captions according to the emotions observed in the user's face. Our project aims to suggest captions for a picture by detecting the emotion in the user's face, to simplify their task. The proposed system uses a machine-learning classification algorithm to detect the emotion in the user's face when they post an image on Instagram. For example, if the user posts a happy picture, our Image Captioner generates captions related to happiness. Through this, people do not have to spend hours of their precious time searching for a suitable caption for their image.
Authors - Karishma Yadav, Smita Naval Abstract - Smart contracts are a landmark of blockchain technology 2.0 and are universally used in various applications. A smart contract is a digital agreement for financial transactions. A logical or syntactical error in a smart contract will breach the system's security. Smart contracts are not always secure; numerous flaws exist, and they cannot be modified due to the immutability of smart contracts. The most critical problems for smart contracts are vulnerabilities, which result in massive financial losses. As a result, extensive analysis should be conducted before deploying a smart contract. Although numerous methods for detecting vulnerabilities have been proposed in previous research, all of them have high false-positive and false-negative rates. To address these issues, EtherSolve, a method based on symbolic execution at the EVM bytecode level, was used to analyze the strategies: a static analysis method based on symbolic execution of the Ethereum operand stack that resolves Ethereum bytecode jumps and establishes an accurate control-flow graph (CFG) of the compiled smart contract. Here we analyze the CFG of smart contracts using the EtherSolve tool, then weigh the benefits and drawbacks of EtherSolve and present our findings.
Authors - Litty Koshy, Chithu K Abstract - Since cameras are so widely available, taking pictures has become more and more common. Photographs are crucial in our daily lives as memories or as rich sources of information, and it is frequently necessary to enhance them to gain more detail. Many tools are accessible to enhance the quality of photos; however, some of them are also widely used to alter images, leading to the dissemination of false information. This makes picture forgeries more severe and frequent, which is now a major cause of concern. Many conventional methods have been developed over time to identify fake images. Convolutional neural networks (CNNs) have drawn a lot of interest recently and have had an impact on the area of picture-fraud detection as well. However, the majority of CNN-based picture-forgery detection methods currently in the literature are restricted to identifying a certain kind of fraud (either image splicing or copy-move). As a result, a method that can quickly and precisely identify any hidden forgery in a picture is needed. We present a powerful deep-learning-based approach for detecting picture forgeries in this study. Images converted from RGB format to an error level analysis (ELA) representation are used to train our model. The suggested model is compact, and its effectiveness shows that it outperforms cutting-edge methods in terms of speed. The experiment's findings are promising, with a 91.37 percent total validation accuracy.
Authors - Kanchan Chavan, P. P. Vaidya, J. M. Nair Abstract - This paper describes the design and construction of a jitter measurement system for the testing and calibration of nuclear timing spectroscopy systems. The proposed and implemented jitter measurement system offers picosecond (ps) resolution for the measurement of jitter present in the timing spectroscopy system. The system makes use of two tracking ADCs (AD7980) with a ramp. The ADCs have 16-bit resolution, which provides a jitter measurement capability of a few tens of ps. The system can be upgraded, for example, with an 18-bit ADC (AD7960) with acquisition jitter of less than 1 ps, which is ideal for such applications; such a system can offer jitter measurement with 1 ps resolution. This calibration system is based on the very low jitter of modern ADCs' acquisition time (which can be as small as a fraction of a ps). The jitters of the individual blocks used in the jitter measurement can be measured by practical experiments, and the stored spectrum corresponding to these jitters can subsequently be utilized, with proper spectrum subtractions, to measure the jitter of the timing spectroscopy system under consideration.
Authors - Remya Kommadath, Aman Kumar Saini, Prakash Kotecha Abstract - The study of evolutionary algorithm (EA) applications on benchmark problems has attracted researchers for the last couple of decades. Dedicated conferences (IEEE CEC, GECCO, PPSN) with particular emphasis on real-life applications have been organized to fill the gap between theoretical and practical study of these optimization algorithms. Real-world problems provide an excellent platform for testing the ability of EAs because of their complex nature, due to factors like conflicting constraints, noise, and specific business modelling. This work demonstrates the ability of five EAs, namely the Adaptive Gaining-Sharing Knowledge with Improved Multi-Operator Differential Evolution algorithm (APGSK-IMODE), Multiple Adaptation Differential Evolution (MadDE), Sanitized Teaching-Learning-based Optimization (sTLBO), Dynamic Neighborhood Learning-based Particle Swarm Optimizer (DNLPSO), and Artificial Bee Colony (ABC), to solve a job shop scheduling problem. The focus is on the two winners of the CEC 2021 competition on solving the real-parameter bound-constrained benchmark test suite: APGSK-IMODE and MadDE. The MadDE algorithm has been employed with a modification that enhanced its performance. It was observed that APGSK-IMODE gave the most satisfactory results among all five algorithms.
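Both APGSK-IMODE and MadDE build on the classic differential evolution scheme. The sketch below shows plain DE/rand/1/bin on a toy bound-constrained problem; this is a deliberate simplification (the competition algorithms add adaptive parameter control and multiple operators on top), and the sphere objective and parameter values are illustrative assumptions.

```python
import random

# Classic DE/rand/1/bin sketch on a bound-constrained toy problem
# (simplified relative to MadDE / APGSK-IMODE).

def sphere(x):
    return sum(v * v for v in x)

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # rand/1 mutation: three distinct donors other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:   # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)           # clip to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            tf = f(trial)
            if tf <= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

if __name__ == "__main__":
    x, fx = differential_evolution(sphere, [(-5, 5)] * 3)
    print(round(fx, 6))
```

A job shop scheduling objective would replace `sphere` with a decoder that maps a real vector to an operation sequence and returns the makespan.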
Authors - Ashwin Srinivasa Ramanujan, Ankith Boggaram, Aryan Sharma, Bharathi R, Aaptha Boggaram Abstract - Indian Sign Language is one of the most prevalent sign languages used in South Asia. Being a two-handed language, Indian Sign Language faces various problems when it comes to recognition and classification. The research on the interpretation of such two-handed sign languages is still in its early stages of growth. To explore the real-time translation of sign languages, the RtTSLC framework is proposed, which translates 26 finger-spelled English alphabet gestures in Indian Sign Language to their corresponding letters. The framework introduces a new technique that creates bounding boxes for two-handed sign languages, and as for classification, state-of-the-art deep learning architectures like VGG16, EfficientNet, and AlexNet have been used. The RtTSLC framework provided promising results, with a recognition accuracy of 92% with the VGG16 architecture, 99% with EfficientNetB0, and 89% with AlexNet in real-time. The results provide evidence that the real-time classification framework proposed is viable for recognition and will help the community of people who are hard of hearing.
Authors - Rania Alotaibi, Souham Meshoul Abstract - In recent years, the exponential growth of data has resulted in many challenges for machine learning tasks. One of the most difficult is the selection, from a large set of available features, of important features that maximize learning performance over the original feature set, save computation time, and generate more understandable models. Therefore, a number of meta-heuristic strategies have been applied in many wrapper techniques to discard redundant and irrelevant features from high-dimensional datasets. However, when using a classification model as the objective function within the optimizer, current meta-heuristic-based techniques can exhibit high time complexity due to the exploration of a vast search space and the need for multiple fitness evaluations, especially in population-based meta-heuristics. In this research work, we suggest a two-stage framework to perform feature selection for classification by using data clustering and optimization methods. In the first stage, the proposed approach employs a genetic algorithm to select the best subset of features, using clustering validation as the objective function instead of the classification evaluation used in wrapper techniques. In the second stage, the best subset of features found is used for classification. The classifiers investigated in this study are support vector machine, decision trees, and random forest. We considered breast cancer classification using a dataset from the UCI repository. The experimental study has shown very promising results in terms of reducing computing cost while obtaining comparable or even better classification performance compared to other feature selection techniques, such as recursive feature elimination and other genetic-based wrapper techniques.
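The first stage described above can be sketched as a binary-encoded genetic algorithm whose fitness is a clustering-validation score rather than a classifier accuracy. Everything below is illustrative: the two-group separation index stands in for the paper's actual clustering-validation measure, and the data, penalty, and GA parameters are assumptions.

```python
import math
import random

# GA feature selection scored by a cluster-separation index (a stand-in
# for clustering validation) instead of a wrapper classifier.

def separation_score(data, labels, subset):
    """Between-group distance over within-group spread for a 2-group toy
    dataset, restricted to the selected feature subset."""
    if not subset:
        return -1.0
    groups = {}
    for row, lab in zip(data, labels):
        groups.setdefault(lab, []).append([row[j] for j in subset])
    means = {lab: [sum(c) / len(c) for c in zip(*rows)]
             for lab, rows in groups.items()}
    m1, m2 = means.values()
    between = math.dist(m1, m2)
    within, n = 0.0, 0
    for lab, rows in groups.items():
        for r in rows:
            within += math.dist(r, means[lab])
            n += 1
    within = within / n + 1e-9
    return between / within - 0.1 * len(subset)  # mild parsimony penalty

def ga_feature_selection(data, labels, n_features, pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    def fitness(mask):
        return separation_score(
            data, labels, [j for j in range(n_features) if mask[j]])
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            p1 = max(rng.sample(population, 3), key=fitness)  # tournament
            p2 = max(rng.sample(population, 3), key=fitness)
            cut = rng.randrange(1, n_features)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                    # bit-flip mutation
                child[rng.randrange(n_features)] ^= 1
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

if __name__ == "__main__":
    # Features 0 and 1 separate the two groups; 2 and 3 are noise.
    rng = random.Random(0)
    data, labels = [], []
    for lab, base in [(0, 0.0), (1, 5.0)]:
        for _ in range(20):
            data.append([base + rng.gauss(0, 0.5), base + rng.gauss(0, 0.5),
                         rng.uniform(0, 5), rng.uniform(0, 5)])
            labels.append(lab)
    print(ga_feature_selection(data, labels, 4))
```

In the paper's second stage, the selected mask would then feed an SVM, decision tree, or random forest.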
Authors - Preethi R, Saurabh Shrivastava, Lakshmy K.V. Abstract - Healthcare is one of the realms in which security plays an integral role in raising the quality of an entity. The emergence of the medical field has laid a path of immense opportunities toward advanced healthcare, and cloud storage has obtained massive valuation in the medical world. We propose a framework that paves the way for storing and retrieving data more securely while maintaining privacy. In this paper, we propose a protocol for the patient, doctor, and hospital under the healthcare system that eliminates the flaws and verifies the safety of the proposed work using Scyther, an automated security-protocol verification tool. In the healthcare habitat, our outcome delivers an adequate means to form a medium capable of setting up, registering, storing, retrieving, authenticating, and verifying electronic healthcare data to protect patients' personal information.
Authors - Jeyavani M, Karuppasamy M Abstract - Melanoma is the most dangerous form of skin cancer and can appear anywhere on the body. Melanoma symptoms include a mole larger than a pencil eraser or darker than the surrounding skin; changes in colour, size, and shape; a mole that bleeds or itches; a sore that does not heal; shortness of breath; a cough that will not go away; and redness or new swelling beyond the border of a mole. Children and adults of various ages can be affected by melanoma. Men are more likely to develop melanoma, but women under 50 have higher fatality rates. Melanoma is currently highly unpredictable and can significantly influence people's lives. A unique approach to Computer-Aided Diagnosis (CAD) from dermoscopy is used to detect malignant aberrant tissue and the rapid proliferation of skin cells, since malignant and non-cancerous lesions cannot yet be distinguished with absolute certainty. With the new method, a computerized, non-invasive dermoscopy system is introduced. Fuzzy C-Means (FCM) clustering is applied to eliminate unimportant features, identify the central region among the datasets, and calculate the distance between each data point and the provided central points until clusters are produced. The varied classification capabilities of the K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) classifiers used for melanoma diagnosis have been demonstrated using straightforward supervised machine learning methods.
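The FCM step above alternates two updates: fuzzy memberships from point-to-centre distances, then centres as membership-weighted means. A minimal one-dimensional sketch (illustrative only; the paper applies FCM to dermoscopy features, and the quantile initialization here is an assumption):

```python
# Minimal fuzzy C-means (FCM) on 1-D data. m is the fuzzifier (m > 1).

def fcm(points, c=2, m=2.0, iters=50):
    pts = sorted(points)
    # deterministic spread initialization across the data range
    centers = [pts[round(i * (len(pts) - 1) / (c - 1))] for i in range(c)]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = []
        for x in points:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            U.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1))
                                for k in range(c))
                      for i in range(c)])
        # center update: mean weighted by u_ij^m
        centers = []
        for i in range(c):
            w = [U[j][i] ** m for j in range(len(points))]
            centers.append(sum(wj * x for wj, x in zip(w, points)) / sum(w))
    return sorted(centers)

if __name__ == "__main__":
    print(fcm([1.0, 1.1, 0.9, 1.05, 8.0, 8.2, 7.9, 8.1]))
```

With two well-separated groups the centres converge near the group means; KNN or SVM classification would then run on the reduced feature set.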
Authors - Jitesh Kumar S, Naveen Santhanam, Jaisakthi S M Abstract - A traffic sign recognition system plays a crucial role in vehicles and assures the safety of humankind. The system provides timely feedback on road information to drivers, which helps them drive without accidents. Deep Neural Networks (DNNs) are gaining more attention in the field of image analysis since they produce high accuracy rates, and many researchers are now focusing on traffic sign recognition using DNNs to get better results. In this paper we have applied a transfer learning technique to classify traffic signs. We chose to work with two architectures to classify road signs, namely a fine-tuned Convolutional Neural Network (CNN) and ResNet50. The two architectures were trained using the GTSRB (German Traffic Sign Recognition Benchmark) dataset downloaded from Kaggle. With these models we obtained good results of 98% and 75%, respectively.
Authors - Priya vij, Dalip Abstract - In today's scenario, data is growing very quickly; each connected device generates and transfers a significant amount of data online. Devices cannot process or store this massive data because of their low processing and storage capabilities. Researchers have suggested solutions through the use of cloud storage systems. For latency-sensitive applications, it is crucial to investigate alternative storage-optimization approaches due to the latency and bandwidth difficulties of cloud storage systems. To increase efficiency, storage-optimization techniques such as dimensionality reduction and deduplication may be deployed close to the edge of the system.
Authors - Mallikarjuna Rao Gundavarapu, Raju Saginala, Madaka Anirudh Varma, Hemanth Jampani, Anjani Sreemanth Bodduluri, Lakshman Chowdary Moparthy Abstract - Liver cancer is a life-threatening disease, with an estimated 2 million deaths per year worldwide. Due to the recent availability of abundant computational power, deep learning approaches have been explored in medical fields. Earlier, some attempts were made using neural-network-based solutions for tumor identification in medical images; however, considerable computational effort is required to convert medical images into a format suitable for ANN processing. In this regard, our paper suggests a deep learning framework that can directly process CT images to detect tumor growth and estimate the associated risk. For our experimentation we have used the LiTS dataset for tumor detection and patient-specific details for risk prediction. Various ML techniques have been explored to identify the most suitable one for complementing oncologists and medical professionals in the treatment process of this lethal disease.
Authors - Milind Rane, Dnyaneshwar Kanade, Vrinda Parkhi, Aditya Anasune, Aditya Suresh, Abhilasha Bande, Ishwari Baranjalekar, Swapnil Adhav Abstract - Facial recognition possesses great importance in today's tech-savvy world and is considered one of the prominent biometric replacements for the traditional techniques of PINs and passwords. A stored database of images is exploited using available image processing techniques, and feature extraction, identification, and classification are done using various algorithms. The techniques used for this whole process are based on machine learning because of its higher accuracy and better efficiency than other available techniques. Face recognition is accomplished using a sub-field of deep learning, i.e., Convolutional Neural Networks. A CNN is typically a multilayer neural network of neurons, trained to perform discrete tasks using extraction and classification.
Authors - Milind Rane, Dnyaneshwar kanade, Omkar Sonone, Abhishek Suryawanshi, Prakash Solankar, Shreya Ghonmode Abstract - Everyday actions within the home are increasingly carried out through home automation systems, which are being developed rapidly these days, due largely to powerful processing devices. Using a wireless sensor network, an IoT-based system can achieve home automation control. The development of a lighting control system is the main goal of this project. In modern homes, traditional switches are quickly being phased out in favour of centralized management systems with remote-controlled switches. Imagine a life where you can simply command your home appliances to work as you wish just by your voice. This project controls ordinary household electronic devices such as fans and lights, among other things, using the web and voice commands on a low budget. Today's Internet of Things (IoT) devices can monitor their current state and share it with other objects on the Internet, allowing them to make intelligent decisions on their own. Humans are always looking for alternative ways to complete their tasks; with the advancement of automation technology, life has become more effortless and simpler in all aspects. Automatic systems are now more popular than manual systems, which are less acceptable to the new generation of people, so this work is intended to replace conventional procedures. We report an efficient Internet of Things implementation that uses sensing systems to monitor regular domestic conditions.
Authors - C. G. Patil, D. S. Deshpande Abstract - Sentiment analysis is a new trend in understanding people's emotions in a variety of everyday situations. Social media data, which includes text as well as emoticons, emojis, and other images, is used throughout the process, including the analysis and classification stages. Many previous studies carried out experiments using binary and ternary classification; multi-class classification, however, offers a more accurate and exact classification, assigning the data to multiple sub-classes based on polarity. Both machine learning and deep learning methods can be applied to the categorization process, and social media may be used to track or analyse sentiment levels. This study examines the use of several artificial intelligence approaches in analysing the sentiments of social media posts for the identification of trepidation or dejection. Sentiment analysis utilising several AI algorithms was performed on social media data comprising text, emojis, and emoticons. For this sentiment analysis, multi-class classification using a deep learning algorithm demonstrates improved precision values.
Authors - Varinder Kaur, Amandeep Kaur Virk Abstract - The major purpose of network traffic classification is to identify various types of systems or traffic data. The analysis of received data packets is a necessity in today's communication networks. There are diverse phases in classifying network traffic: pre-processing the data, extracting the attributes, and performing the classification, with the dataset utilized as the input in the classification stage. This paper studies diverse ML techniques for classifying network traffic.
Authors - Renuka Deshmukh, Srinivas Subbarao Pasumarti Abstract - Smart apps, a digital set-up, smart structures, and an innovative system are all necessary for automated data scheduling and analysis in a digitally advanced world with emerging technologies, where the monitoring of human behavior and lifestyle is required. This study attempts to develop ground-breaking software and a management framework, taking the human element into consideration, that supports arranging the upkeep of operational procedures, lowers training expenses, boosts manufacturing output, and establishes a virtual reality for man-machine smart infrastructure design. The findings from six multinational corporations show the potential for worldwide operational-process standardization, which might eventually lead to the creation of intelligent systems with almost zero failure rates that can continually improve. For the purpose of enhancing man-machine communications, the innovative mechanism and solution suggested in this study offer recommendations for choosing the upcoming generation of smart manufacturing and innovative techno-savvy structures, along with the corresponding smart sustenance and training.
Authors - S.Lohitha, S.Dwijesh Reddy, B.Revanth Krishna, N. Satya Krishna Abstract - News is the most crucial resource for the general population to learn about what is occurring across the world. Even if newspapers remain a reliable source of news, social media is currently the next frontier. Because these social networks are so accessible, ordinary individuals may easily alter the news to produce fake news. These fictitious news stories may be used for both political and commercial gain, and as a vehicle to stir up neighborhood animosity, which is detrimental to society. To mitigate its impacts, it is crucial to recognize fake news, yet a platform that can validate and classify news is currently unavailable. In this paper, a technique is presented for determining whether or not news is reliable. The system uses ML classifiers including Decision Tree, Random Forest (RF), and Logistic Regression (LR) to train on features retrieved from the data using natural language processing techniques. We evaluate each classifier's performance using a variety of parameters; the best classifier provides the outcome for real-time news prediction.
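One of the classifiers named above, logistic regression, can be sketched end to end on toy bag-of-words features. Everything here is illustrative: the tiny vocabulary, hand-labelled documents, and hand-rolled gradient descent stand in for the paper's NLP feature pipeline and library classifiers.

```python
import math

# Toy logistic regression for a fake(1)/real(0) decision over
# bag-of-words counts of an assumed 4-word vocabulary.

VOCAB = ["shocking", "miracle", "official", "report"]

def featurize(text):
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def train(samples, lr=0.5, epochs=300):
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, y in samples:
            x = featurize(text)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            z = max(-30.0, min(30.0, z))        # guard exp overflow
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                           # gradient of log loss
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(model, text):
    w, b = model
    z = b + sum(wi * xi for wi, xi in zip(w, featurize(text)))
    return 1 if z > 0 else 0                    # 1 = fake, 0 = real

if __name__ == "__main__":
    data = [("shocking miracle cure", 1), ("shocking miracle secret", 1),
            ("official report released", 0), ("official weather report", 0)]
    model = train(data)
    print(predict(model, "miracle shocking news"))  # 1 (fake)
```

A real pipeline would use TF-IDF features over a full corpus and compare this against the Decision Tree and Random Forest classifiers.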
Authors - Premalatha P, Poonguzhali. E, Rekha. R Abstract - Diabetes mellitus is regarded as a major source of many serious diseases; if it is not treated properly, it mostly causes severe health issues and can lead to death. It is considered a non-contagious disease and occurs when the sugar level in the blood increases above an average value. Therefore, regular measurement of blood glucose levels is a necessary condition for taking care of diabetic patients. Existing techniques for the determination of blood glucose levels are invasive: small single-use needles are pricked directly into a finger to collect blood samples, which are then passed to a fresh test tube for further chemical processing to determine the number of sugar molecules present. The desire to reduce patient pain and eliminate the use of test strips has led to the development of non-invasive techniques. A near-infrared LED and photodetector determine the sugar and insulin concentration: the concentration of glycemia in the blood sample is estimated from the level of infrared emission in the blood, and the total concentration of sugar molecules is shown on an LCD in an appropriate unit. To start the process, the fingertip is inserted between the NIR-LED and NIR-PD, and the reflected signal is fetched by the NIR-PD. The value obtained is filtered and then amplified, with the noise removed by FFT. The filtered, amplified signal is connected to an NPN transistor to obtain the negative value and to show the evaluated value on the LCD early. The signal is connected to Arduino port A0, and finally the output value is displayed on the LCD.
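The FFT-based noise-removal step mentioned above amounts to transforming the photodetector signal, zeroing high-frequency bins, and reconstructing. A hedged sketch, using a plain O(n²) DFT as a stand-in for a real FFT and a synthetic sine-plus-noise waveform in place of the NIR-PD signal:

```python
import cmath
import math

# Low-pass filtering via DFT: zero frequency bins above a cutoff,
# then reconstruct the smoothed waveform with the inverse DFT.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def lowpass(signal, keep_bins):
    X = dft(signal)
    n = len(X)
    for k in range(n):
        # keep DC and the lowest keep_bins positive/negative frequencies
        if min(k, n - k) > keep_bins:
            X[k] = 0
    return idft(X)

if __name__ == "__main__":
    n = 64
    clean = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
    noisy = [c + 0.4 * math.sin(2 * math.pi * 20 * t / n)
             for t, c in enumerate(clean)]
    filtered = lowpass(noisy, keep_bins=4)
    err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
    err_filt = sum((a - b) ** 2 for a, b in zip(filtered, clean))
    print(err_filt < err_noisy)  # True: the 20-cycle noise is removed
```

An embedded implementation on the Arduino would use a fixed-point FFT library rather than this direct transform.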
Authors - Arun Jana, Devdulal Ghosh, Subhankar Mukherjee, Alokesh Ghosh, Amitava Akuli, Hena Ray, Nabarun Bhattacharyya Abstract - The major qualitative parameters for evaluating groundnut oil include phospholipid content, fatty acid content, physicochemical attributes, metal content, etc. The focus of this study is the prediction of fatty acid content, one of the most crucial parameters in quality evaluation in the oil industry, by analysis of the multi-sensor signal generated from groundnut oil. This analysis is usually performed with expensive methods like GC-MS and HPLC. These techniques are not suitable for regular analysis in a continuous production line, since the instruments are costly, skilled manpower is required to operate them, and analysing the fatty acid content is a very time-consuming and complex process. In this article, the fatty acid content of groundnut oil was tested with a custom-developed multi-sensor instrument, using pattern analysis as a key index. Eight nonspecific MOS sensors were chosen, after a sensor-selection procedure using a correlation matrix and a multiple linear regression model, to detect eight common groundnut oil chemical components. The compounds are detected by the multi-sensor array signature that expresses the odour pattern of the oil samples. A multivariate data analysis approach, the probabilistic neural network, was used for multi-sensor data processing and prediction of the fatty acid content of groundnut oil.
Authors - Abhilash SK, Venu Madhav Nookala, Karthik S, Bhargav Kumar Nammi Abstract - Pixel-level semantic segmentation, or part segmentation, is currently one of the most widely researched and implemented segmentation techniques. While there is a considerable number of works in the literature involving humans, semantic part segmentation involving only particular classes of rigid bodies, like vehicles, is an area of research that can be explored more. Vehicle part segmentation can support the problem of automated vehicle-damage assessment by identifying the damaged or missing parts of a vehicle. In this work, a weakly semi-supervised part-segmentation technique using a strategic online label-refinement process is adopted for the task of segmenting car parts. A novel architecture is proposed that captures both the boundary information and the parsing information (semantic features) of car parts. This technique is experimented on a car-parts dataset (DSMLR), and the model is evaluated using the mean Intersection-over-Union (mIoU), Pixel Accuracy (PA), and mean Accuracy (mA) metrics. Various CNN and Transformer architectures are used for training and evaluation, and an extensive comparison is made across them. Overall, an mIoU of 73.85 is achieved using CNN architectures and 73.4 using Transformer architectures, where the existing works in the literature reported the mAP metric for the same task.
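The mIoU metric used for evaluation above has a compact definition: per class, the intersection of predicted and ground-truth pixels over their union, averaged across classes present. A minimal sketch over flat lists of per-pixel part labels (the label maps are illustrative):

```python
# Mean Intersection-over-Union (mIoU) over flat per-pixel label lists.

def mean_iou(pred, truth, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:                  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

if __name__ == "__main__":
    truth = [0, 0, 1, 1, 2, 2]
    pred = [0, 1, 1, 1, 2, 0]
    # class 0: 1/3, class 1: 2/3, class 2: 1/2 -> mean 0.5
    print(mean_iou(pred, truth, 3))  # 0.5
```

Pixel Accuracy and mean Accuracy follow the same counting pattern, with per-pixel and per-class correct fractions respectively.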
Authors - Mridula Korde, Huzefa Essaji, Parth Bhalerao, Vivek Kaushik, Chandrakant Mohadikar, Anuj Sharma, Rahul Laddhad Abstract - The availability of key health parameters of a remotely located human to a trained medical professional can result in early diagnosis and prognosis, leading to better healthcare and thereby reducing patient load in government hospitals. However, testing these key parameters requires sophisticated equipment and trained manpower, which require heavy investment. As a result, these are not available in remote locations; consequently, remote-location patients have to spend money to come to city hospitals for further treatment. This research proposes a portable device capable of measuring vital parameters of the human body, such as body temperature, ECG, and PPG, and further calculating other parameters like heart rate and blood pressure in a non-invasive way. Additionally, this study focuses on how the desired system can be designed in as low a form factor as possible. The research discusses and demonstrates a prototype of the proposed idea along with its performance analysis. Lastly, the paper throws light on future recommendations and how the system's accuracy can be further improved.
Authors - Shivani Saraf, Ram Kumar Bagaria, Harisudha Kuresan, Samiappan Dhanalakshmi Abstract - One of the essential needs of every living thing on earth is food, and people all around the world are becoming increasingly sensitive to their diets. The fight against obesity, weight gain, diabetes, etc., requires accurate methods for measuring food and energy intake. An innovative and practical solution that helps users and patients track their food consumption and collect dietary data could give the most insight into long-term prevention and efficient treatment programs. In this work, we offer a calorie-measuring method that can help patients and medical professionals fight diet-related diseases. The user can snap a photo of the food and instantly determine how many calories were consumed using our suggested method. We train the model with deep convolutional neural networks, using 80 high-resolution food photos per class, to precisely identify the food components in the user's camera-taken image. We deployed Faster R-CNN algorithms to identify food items and label them appropriately.
Authors - N Praveena, N Gunavathi Abstract - This article presents a dual-mode Substrate Integrated Waveguide (SIW) filter obtained by perturbing a square cavity for C-band applications. The perturbed TE110 and TE120 modes, which have distinct field distributions, are formed by introducing a central metallic via in the SIW square cavity. The proposed filter is designed by combining CST solutions with machine learning algorithms. Four regression algorithms (XGBoost, random forest, decision tree, and K-Nearest Neighbour (KNN)) are compared and evaluated based on their accuracy scores. The KNN algorithm provides the most accurate results: 89% for S11 and 99.96% for S21. The RT/duroid 5880 substrate is used for the fabricated filter of size 28.4 x 28.4 x 0.51 mm3. The fabricated and simulated SIW filter results are validated with slight discrepancies.
Authors - Naveen John.J, I. Shatheesh Sam Abstract - In this digital environment, the most important domain that requires a cloud infrastructure is the medical sector, where hospitals hold vast amounts of data in patient health records. Since personal health records are stored on the cloud, the framework requires some form of encryption to keep the information safe. The proposed Modified Attribute-Based Encryption (MABE) data exchange method, based on a searchable encryption scheme, uses a combination of Key Policy Attribute Based Encryption (KP-ABE) and Ciphertext Policy Attribute Based Encryption (CP-ABE) along with searchable attribute-based encryption. Through the use of attributes, the patient's medical files are decrypted. Prior to encryption, eigen decomposition is applied to reduce the attribute set. The identities of the patients are employed as attributes in this model, which fortifies and safeguards the system, besides being rigorously established under various conditions. In addition, compared to standard data exchange methods, the proposed scheme requires less cost, time, and storage: the access cost, time, and storage requirements for the proposed system are 2100 bits, 24 milliseconds, and 1212 bytes. It has a number of security features, such as tampering protection, visibility, and scalability.
Authors - Ahmad Hanif Asyhar, Fatmawati, Windarto, Dian Candra Rini Novitasari, Moh. Hafiyusholeh Abstract - Radicalism often triggers terror movements in Indonesia. The spread of radical ideology in Indonesia may support terrorism in the world. A clear picture of terrorism in Indonesia is crucial as the foundation for the government to take action against radicalism. This study proposes the modelling of radicalism in Indonesia and the prevention of the dangers of radicalism by giving treatment. A model with SERT compartments is used to describe the spread of radicalism in Indonesia. The SERT compartments are S = Susceptible, E = Extremist, R = Recruiters, and T = Treatment. This study uses the fifth-order Runge-Kutta method as the numerical solution of the SERT model. The fifth-order Runge-Kutta method has a higher level of accuracy compared to both the third- and fourth-order methods. The results of modelling and simulations showed that the treatment for radicalism has a fairly high success rate. This success rate is indicated by the reproduction number R0.
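The SERT dynamics can be sketched with an assumed set of compartment flows; the equations, rate parameters, and initial fractions below are illustrative placeholders, not the paper's calibrated model. SciPy's RK45 integrator is a fifth-order Runge-Kutta method of the kind the study describes.

```python
# Sketch: an assumed SERT-style compartment model integrated with a
# fifth-order Runge-Kutta method (SciPy's RK45 / Dormand-Prince).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, tau = 0.5, 0.1, 0.2   # assumed transmission/recruitment/treatment rates

def sert(t, y):
    S, E, R, T = y
    dS = -beta * S * E                      # susceptibles radicalized by extremists
    dE = beta * S * E - gamma * E - tau * E # extremists: promoted or treated
    dR = gamma * E - tau * R                # recruiters: treated over time
    dT = tau * (E + R)                      # treatment absorbs E and R
    return [dS, dE, dR, dT]

# Initial population fractions (assumed): mostly susceptible
sol = solve_ivp(sert, (0, 100), [0.95, 0.04, 0.01, 0.0], method="RK45")
S, E, R, T = sol.y[:, -1]
print(f"final fractions: S={S:.3f} E={E:.3f} R={R:.3f} T={T:.3f}")
```

Because the flows sum to zero, the total population fraction is conserved, which gives a quick sanity check on the integration.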
Authors - Bidisha Biswas, Duroy Roy, Manotosh Biswas Abstract - Smart computing and communications need high wireless data bandwidth as one of their essential ingredients. Microwave and millimeter-wave antennas that can handle high data bandwidth are the gateways of high-throughput ICT. One of the most sought-after antennas for microwave and millimeter-wave applications is the dielectric resonator antenna (DRA), due to its lossless radiation characteristics. The bandwidth of the DRA is very low (nearly 3%), so this type of antenna is not applicable where broad bandwidth is required. In this article we have used DRAs in a stacked configuration to increase the bandwidth. The dimension of the upper DRA is varied while the dimension of the lower DRA is kept fixed. Our study shows about 3% bandwidth offered by a single DRA, whereas the stacked DRA offers about 22% bandwidth, and the introduction of an air gap further enhances the bandwidth to about 27%. In this paper we have also presented the variation of resonant frequency, bandwidth and input impedance with the variation of the dimension of the DRA.
Authors - Pooja Manral, Seeja K.R Abstract - Emotions are very crucial in mental healthcare and in taking decisions in real time. Several researchers have worked on non-physiological signals such as speech, posture, and facial expression, but these are quite subjective and depend on various parameters, which makes it difficult to identify the emotions of a person. Physiological signals like the Electroencephalogram (EEG) give better results for identifying emotions. In recent times, research on emotion detection through EEG signals has grown continuously. In this paper, a review of recent work on emotion recognition through EEG signals is presented. The analysis of the work concentrates on signal pre-processing, extracting features, selecting appropriate features, and detecting emotions using different approaches.
Authors - Lakshmi Prasanna Chalicham, Vineetha K.V Abstract - Face recognition is a biometric technique used to identify faces, and it has gained considerable attention in the last few years. Convolutional neural networks are most commonly used for face recognition; the proposed system focuses on finding the similarity of faces using Multi-Task Cascaded Convolutional Neural Networks, which are used for face detection and alignment. In this proposed system a Bollywood actors dataset is used, with a few additional images. The similarity is checked using various angles and conditions of faces, such as front-face, side-face, spectacle-face, and mask-face images, and under varying illumination.
Authors - Sanjay Hanji, Savita Hanji Abstract - Market segmentation allows companies to target the right product and advertising to the right customers, thereby enhancing the performance of their marketing campaigns. The market segmentation process is aided by clustering algorithms. Mini Batch K-means (MBK) is an enhanced K-means clustering algorithm which has proved to be efficient in terms of computation speed and space utilization for large datasets, tested in many applications. However, the extant literature has shown that it has quality issues as the number of clusters formed in large datasets increases. Therefore, the purpose of the present work is to assess the performance of the Mini Batch K-means algorithm, compared to the standard K-means algorithm, using performance parameters such as quality of clusters and computational speed on a small four-wheeler market segmentation dataset. The results revealed that Mini Batch K-means cluster quality was affected by the number of clusters, whereas K-means was not much affected. However, in the computational time assessment, Mini Batch K-means was much slower than K-means for the small dataset.
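A minimal version of such a comparison, on synthetic blob data rather than the paper's four-wheeler dataset, might look like the following; the cluster count, sample size, and silhouette metric are assumptions for illustration.

```python
# Sketch: comparing standard K-means and Mini Batch K-means on cluster
# quality (silhouette score) and wall-clock time, on synthetic data.
import time
import numpy as np
from sklearn.cluster import KMeans, MiniBatchKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=2000, centers=5, random_state=42)
for Model in (KMeans, MiniBatchKMeans):
    model = Model(n_clusters=5, n_init=10, random_state=42)
    t0 = time.perf_counter()
    labels = model.fit_predict(X)
    elapsed = time.perf_counter() - t0
    print(f"{Model.__name__}: silhouette={silhouette_score(X, labels):.3f}, "
          f"time={elapsed:.3f}s")
```

Rerunning the loop with increasing `n_clusters` would reproduce the kind of quality-vs-k study the abstract describes.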
Authors - Thulasi Bikku, K P N V Satyasree Abstract - Predicting software bugs is essential during software development and maintenance. An essential activity of the quality assurance process, defect prediction at the beginning of the software development life cycle has received extensive research over the past two decades. Early detection of defective modules can assist the development team in making efficient and effective use of the resources at hand to produce high-quality software in a short amount of time. Using a machine learning approach, which finds hidden patterns in software attributes, it is possible to recognise the problematic modules. On the NASA dataset JM1, the proposed work is compared with various machine learning classification procedures. The generalization ability of random forests is higher than that of other multi-class classifiers because of the effect of bagging and feature selection. However, since ensemble learning with random forests requires a large number of decision trees to achieve high performance, it is not well suited for implementation on limited-scale hardware such as embedded systems. In this paper, we propose a boosted random forest; experimental outcomes show that the proposed technique, which consists of fewer decision trees, has higher generalization capability compared to the traditional technique. The experimental findings demonstrated that our proposed boosted random forest model achieves greater defect prediction accuracy, improving software quality.
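The trade-off the abstract describes (a large bagged forest versus a smaller boosted ensemble) can be sketched with scikit-learn on synthetic, imbalanced data standing in for JM1; AdaBoost here is a generic boosting stand-in, not the authors' boosted random forest.

```python
# Sketch: large random forest vs. a much smaller boosted ensemble on
# synthetic defect-style data (imbalanced binary classification).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for JM1-style software metrics (80/20 class imbalance)
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8],
                           random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)  # many trees
boosted = AdaBoostClassifier(n_estimators=25, random_state=0)  # far fewer trees

for name, clf in [("random forest (100 trees)", rf),
                  ("boosted ensemble (25 trees)", boosted)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```

The point of the comparison is the estimator count: a boosted ensemble can stay competitive with a fraction of the trees, which matters for embedded deployment.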
Authors - Sanjana S, Vinayak R Pillai, Yadhu Krishna M, Remya S Abstract - Web applications face many protection issues. Cross-site scripting (XSS) is one of the most serious security threats that web users have to deal with, and the risk of XSS attacks is growing as more connected devices engage various web applications for various tasks. Hackers can acquire victims' sensitive or other delicate information by exploiting XSS vulnerabilities in web applications. Most critical systems today depend on these apps, including those in health management, investment, and even emergency response. They must, accordingly, incorporate trustworthy security procedures, apart from the added value they supply to their consumers. In this work, we study various cross-site scripting threats against web applications.
Authors - Sakshi Singh Rajput, Ashema Hasti, Harpreet Kaur, Lokesh Kumar Gupta Abstract - This paper presents and elaborates on a mobile application being developed for a higher education institution with the aim of delivering timely notifications of events, without missed deadlines, and of conveying on time the job/internship opportunities floated by the Placement team. The application has features for notifying and registering students for any upcoming college event and also makes it possible for alumni to register themselves in the app so that their work/career profiles can be recorded. Both alumni and the Placement team can add any new internship or placement opportunity, visible to other users only after the admin validates it. There are many technologies available for developing a cross-platform-compatible app; one such technology is the Flutter framework. Flutter is an open-source UI software development kit (SDK) used to develop cross-platform applications for Android, iOS, Linux, macOS, Windows, etc. We used the Flutter SDK for the frontend, NodeJS for the backend and MongoDB for the database. A prototype is presented in this paper showing the main features of the application. The results of a preliminary study with the application are positive and reveal that students are excited to use the application and will use it on a regular basis when it becomes available.
Authors - Prema Sahane, Sandhya Shelke, Ketan Urkudkar, Rutuja Dhokane, Omkar Dhawale Abstract - Spoofing detection has been a significant concern since the early 21st century due to the tremendous growth of electronic access, such as financial transactions and fraud via online services. Spoofing websites look the same as legitimate websites and can ask the user to enter their details. Among several detection tactics, including information-based approaches, uniform resource locator (URL)-based detection is frequently utilized not only for its comparable accuracy but also for its flexibility across a wide range of data types (for example, URLs embedded in spam messages or emails). Several machine learning algorithms may be employed to improve prediction accuracy. In our technique, we use only the information in the URL of a web page to determine whether the site could be a spoofing site. Thus, there is no need to actually visit a web page to determine whether it is spoofing. This also keeps the user from visiting spoofing sites and exposing themselves to the malicious code such sites may deliver.
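Purely lexical URL features of the kind such detectors rely on can be extracted without ever fetching the page; the sketch below shows an illustrative subset (these particular features are common in the URL-detection literature, not necessarily the paper's exact feature set).

```python
# Sketch: lexical URL features commonly fed to ML-based spoofing detectors.
import re
from urllib.parse import urlparse

def url_features(url):
    """Illustrative lexical features extracted from a URL alone."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "length": len(url),                              # long URLs are suspicious
        "num_dots": host.count("."),                     # many subdomains
        "has_ip_host": bool(re.fullmatch(r"[\d.]+", host)),  # raw IP instead of domain
        "has_at_symbol": "@" in url,                     # userinfo trick
        "num_hyphens": host.count("-"),                  # brand-impersonation hosts
        "uses_https": parsed.scheme == "https",
    }

print(url_features("http://192.168.0.1/secure-login@bank"))
```

A feature dictionary like this would then be vectorized and passed to whichever classifier the detector uses.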
Authors - Mangal Patil, Jyoti Morbale, Anuradha Nigade, Saloni, Padma Priya Abstract - Every part of every continent is currently experiencing a global health disaster caused by the coronavirus (COVID-19). Due to this, there is a critical need to adopt preventive measures to effectively combat this infection. Using a face mask is an effective way to protect everyone in public. When there are large crowds at public places, it is very difficult to monitor by physical checking whether a person is wearing a mask or not; hence developing a solution for detecting face masks is a challenging task. This paper deals with the implementation of a real-time face mask recognition methodology using Keras, TensorFlow, MobileNetV2 and OpenCV. In this proposed approach, a two-phase system is used to train our model on different datasets from Kaggle, GitHub and our own real-world database consisting of people with and without masks. By comparing our approach with other existing approaches, we find that it has the highest validation accuracy, of 99.35%. The performance analysis of the presented results also shows that MobileNetV2 is the best approach to detect a face mask with high accuracy.
Authors - K Rajeshwar Abstract - Government services are made accessible to the public in an easy, efficient and transparent way via the e-governance process. The Centre's e-governance practices have been implemented for a long time, and almost all the states and union territories have also begun to implement e-governance measures to monitor and oversee different government activities. The Government of India encourages e-governance at various levels, offering prizes for the best e-governance practices. In reality, the administration of e-governance is effectively focused on the blocks and districts. But, because of the lack of resources and infrastructural bottlenecks, grassroots institutions could not implement e-governance to the maximum extent of public reach. Under these circumstances, a research study was conducted to assess the willingness and support for eGovernance implementation at the grassroots level among elected representatives and panchayat secretaries in two states, viz., Kerala and Madhya Pradesh. Through this study, it is intended to know the willingness and support of the officials to roll out and build the eGovernance system at the GP level in the two states. It is evident in the selected states that the Panchayati Raj Institutions (PRIs) are completely satisfied with enabling the eGovernance system at the GP level for providing eServices to the citizens.
Authors - Alaka Das Abstract - Technical people use Structured Query Language (SQL) to retrieve information from relational databases, whereas novice users lack the expertise to use SQL. For these people, natural language interface to database (NLIDB) systems are nowadays being developed to let them deal with data comfortably. These NLIDB systems convert a query in natural language (NL) to its corresponding SQL. This task is famously known as the text-to-SQL task. Most of the work in this area considers this task as a semantic parsing problem or a variation of it, and though research in this area is gradually converging to satisfactory outcomes, to date no widely accepted commercial product is available. This paper considers the text-to-SQL task as a machine translation problem and describes a model that uses an open-source neural machine translation toolkit, OpenNMT, for training and translation, and a parallel corpus developed from the Spider dataset. The training accuracy of the model is 99.97%, which no existing NLIDB system has achieved so far. This result suggests that a more organized dataset may lead to near-perfect training accuracy, and the model may be used in certain real-life applications to ease human workload. The Python code for parallel corpora generation, training and testing will be available at https://github.com/Ali-Das/Text to SQL using OpenNMT with Spider site.
Authors - Athira B Menon, Devipriya, Jyothika S, Karthika Ratheesh, Sudev K U, Anjali T Abstract - In this era where technology is an inseparable trait, the rates of various crimes have been materially escalating. The focus should be on logical solutions rather than on the crime itself. Especially for crimes related to human trafficking and sexual abuse against women, there has been no considerable development in ensuring safety. With people being the main powerhouse, Sting is a web application developed for the user's safety in which other users play significant roles. Users can report a crime by uploading images or by commenting; that particular location will then be marked as a danger zone, and other users in that location can help the one in trouble. Other existing apps rely on law enforcement bodies to retrieve the victim; here the users can lend a helping hand. Also, the map feature helps other people check whether the path they are traveling is safe or not. Sting stores reported cases and enables users to choose their path. Hence Sting acts as a safe travel guide as well as a crime detection app.
Authors - Param Ahir, Mehul Parikh Abstract - In this paper, a critical analysis of recent trends and techniques for tissue segmentation of pediatric brain Magnetic Resonance Imaging (MRI) is performed. A significant amount of research has recently been conducted in the field of medical image analysis, in which MRI is regarded as one of the primary imaging modalities, capable of providing a clear, in-depth view of various organs. Tissue segmentation is an important process in brain research, especially when the subjects are infants and defining normalcy is difficult. Our paper focuses on recent developments in this field. Deep learning technological advancements are assisting this field in moving forward, with dataset availability via challenges such as iSeg-2017 and iSeg-2019. This paper compares the methodology, datasets, and results obtained by various methods. The paper also discusses the limitations of all available approaches as well as possible future directions in the field.
Authors - Gk Abani Kumar Dash, Rakesh Kumar Godi, Chinmaya Kumar Nayak, Santosh Kumar Sahoo, Satyabrata Das Abstract - In today's world, the extensive use of the internet and its supporting devices has led humans to use cutting-edge technologies in their lives. All IoT devices require a network to operate that is reliable and provides sufficient security. To adopt these technologies, we need wireless sensor networks (WSNs), which facilitate intercommunication among IoT devices. Although WSNs provide limited network access, they are extensively used worldwide, and integrating sensor nodes with WSNs allows users to access devices from any corner of the world. The WSN is one of the greatest innovations in the field of network technology. A WSN consists of an enormous number of battery-operated sensor nodes, each having a core coupling unit, several storage units and a wireless transceiver. WSN architectures are broadly classified into two types: distributed and hierarchical. WSN applications include the detection of natural calamities through air pressure, temperature and vibration sensing. WSNs are mainly used in remote places and for military applications. In this paper, we present an extensive survey of various IoT devices and WSNs and attempt to bridge the gap between IoT and WSNs.
Authors - Ambrish R, Amritha P.P, Lakshmy K.V Abstract - Encryption is a very important aspect of the world of information technology, and its importance has only kept increasing with every passing year and with every new attack. In this paper, we selected two image encryption algorithms to obtain image ciphers that are strong enough to resist image cryptanalysis. The encryption effect of both algorithms, in terms of information entropy, sensitivity analysis and correlation analysis, makes the scheme secure. The objective of this work is to attack the combination of permutation- and chaos-based encryption algorithms. A deep learning model was trained with these two ciphers. Given encrypted images, the learned model achieved a testing accuracy of up to 95 percent, while the training accuracy was observed to be 99 percent.
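Two of the measures mentioned, information entropy and adjacent-pixel correlation, can be computed as below; the random array is only a stand-in for a real cipher image.

```python
# Sketch: two standard image-cipher quality measures.
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit grayscale image (ideal cipher: close to 8 bits)."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

def adjacent_correlation(img):
    """Correlation between horizontally adjacent pixels (ideal cipher: close to 0)."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in cipher
print(f"entropy={entropy(cipher):.4f}, corr={adjacent_correlation(cipher):.4f}")
```

A plaintext image would show entropy well below 8 and adjacent-pixel correlation near 1, which is exactly the contrast these measures are designed to expose.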
Authors - Shambhavi Mishra, Rajendra Kumar Dwivedi Abstract - Today, with the current shift from traditional marketing to spreading product reach through social media, it becomes necessary to find the most suitable strategy for influence maximization. This decision has been challenging due to certain factors which traditional algorithms did not consider. The dynamic nature of user behaviour, enlargement of the network over time, the common but often-missed scenario where a user is part of more than one network, and the role of common nodes are some of these challenges. Thus the solutions need to work in this direction as well. Much of the literature has proposed algorithms and approaches that keep these considerations in mind. In this paper we discuss many such challenges and the approaches that researchers have devised over time.
Authors - Vijander Singh, Ottar L. Osen, Robin T. Bye Abstract - Autonomy at sea relies on algorithms (often local) to make decisions. One approach to creating these algorithms is through the use of artificial intelligence. Numerous black-box machine learning-based algorithms have been proposed for autonomous surface vessels (ASVs) to make decisions like changing the speed or the route in order to reach the operational goal in a way that is optimal with respect to cost (fuel, time, etc.) and safety (avoiding collisions or dangerous situations). Hence, the algorithms must take into account many constraints and are influenced by several varying factors such as other vessels, weather, etc. The objective of this paper is to propose a model that provides the reason behind the ASV's decision when it is on a predefined path and changes speed or route. Fuzzy logic is used to record the expert knowledge, based on COLREGs, to steer the vessel and take decisions during a collision course. Data have been captured based on expert knowledge and used to train an explainable model, which predicts the reason behind the decision. The focus of the paper is on local explainability rather than global decisions. The structured abstract of the paper is: (1) Background: Several AI-enabled algorithms have been proposed for implementing autonomy to avoid collisions. These black-box techniques provide good predictions but at the same time fail to explain the reason behind the decision, which makes the model less trustworthy; (2) Methods: Expert knowledge (COLREGs) has been captured using fuzzy rules, applied as the ASV progresses, and the decisions have been recorded; (3) Results: The explainable model provides the reason behind the action taken by the collision avoidance system; (4) Conclusion: A model has been proposed that explains the collision avoidance system to make it transparent and trustworthy.
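A toy fuzzy rule of the kind used to encode COLREG expert knowledge might look as follows; the membership functions, bearing sector, and output scaling are invented for illustration and are not the paper's rule base.

```python
# Sketch: one hand-rolled fuzzy rule inspired by COLREG crossing situations.
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def starboard_turn_degree(bearing_deg, range_nm):
    """IF target on starboard bow AND range close THEN turn starboard.
    (Illustrative rule, not the authors' rule base.)"""
    on_starboard_bow = tri(bearing_deg, 0, 45, 112.5)  # membership of bearing
    close = tri(range_nm, 0, 0, 2.0)                   # membership of range
    fire = min(on_starboard_bow, close)                # Mamdani AND (min)
    return 30.0 * fire                                 # defuzzified turn, degrees

print(starboard_turn_degree(45, 0.5))
```

Because both the rule firing strength and the resulting turn angle are explicit numbers, every recorded decision carries its own explanation, which is the local explainability the paper targets.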
Authors - O R S Rao, Rajkumar Abstract - Due to the unprecedented COVID-19 pandemic scenario, students all across the world were forced to leave their universities and attend classes from their homes. The vast majority of Higher Education Institutions (HEIs) made use of Educational Technologies (EdTech) to provide seamless classes. Though COVID-19 abated, most educational institutions continued a blended mode of teaching-learning. However, there was a notable difference in the levels of EdTech integration and usage at the institutional and individual levels, for both students and teachers. Considering the importance of teachers in the teaching-learning process, this research concentrates on how teachers in HEIs are integrating EdTech in pedagogy. The Management discipline was taken up for the study, as it is practice-oriented and hence more amenable to the deployment of EdTech in academic delivery. The study employed a modified UTAUT2 model, substituting Value Belief for Price Value, which is not relevant in educational institutions. 111 responses were collected from March to July 2022 from teachers handling classes for undergraduate and postgraduate students in the Management discipline at higher education institutions, including business schools, in the Jharkhand state of India. Results showed that teachers' behavioural intention is significantly influenced by their value beliefs and habits, and in turn has a favourable impact on actual usage. While the results are corroborated by some earlier studies, there is clear evidence of the impact of COVID-19 on the technology integration behaviour of teachers.
As EdTech is increasingly becoming a key enabler for more effective academic delivery, this study helps in understanding the key aspects to motivate teachers in HEIs, including Business Schools, to integrate it into their teaching-learning processes, leading to blended education delivery, thereby improving the learning outcomes of the students, in a big way.
Authors - Tihomir Dovramadjiev, Diana Pavlova, Rusko Filchev, Dimo Dimov, Kalina Kavaldzhieva, Beata Mrugalska Abstract - Complete recovery of the human dental jaw is a challenge for dentists and dental technicians. The process is complex and requires a proper treatment methodology, with emphasis on the patient and on the application of modern design technologies in terms of software and additive manufacturing. Biocompatible materials are also essential. The present study shows a complete concept for restoration of the human dental jaw using the titanium alloys Ti6Al4V and Ti-6Al-7Nb for dental implants, and zirconia (ZrO2) for the coronal visible part of the teeth. The research presents, in detail, an advanced work methodology, modern technological means and consideration of the human factor, guaranteeing quality results and successful dental healthcare for the needs of complex clinical cases, such as restoration of complete dental jaws. The presented treatment process is of great importance for dental healthcare and all stakeholders.
Authors - R.Maruthi, P.Anusha, Srideivanai Nagarajan, K.Thiyagarajan Abstract - Satellite images usually suffer from haze due to hazy weather conditions. Haze and fog cause poor visibility in remote sensing images and make them difficult to interpret. Most computer vision systems use remote sensing images for interpretation, and hazy remotely sensed images cause serious errors during that interpretation. There are various ways to solve this issue and improve image detail. This study explores some of the haze removal methods and estimates their performance using quantifiable measures. The experimental results are evaluated in terms of visual analysis and quantifiable measures.
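One widely used haze-removal baseline (not named in the abstract, the dark channel prior of He et al.) can be sketched as follows; the synthetic images and the simple additive airlight model are illustrative assumptions.

```python
# Sketch: the dark channel prior, a common building block of dehazing methods.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB channels, then a local minimum filter.
    Haze-free natural images tend toward zero; haze lifts this value."""
    return minimum_filter(img.min(axis=2), size=patch)

rng = np.random.default_rng(0)
clear = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # stand-in haze-free image
hazy = 0.5 * clear + 0.5                         # toy airlight: I = t*J + (1-t)*A
print(dark_channel(clear).mean(), dark_channel(hazy).mean())
```

The gap between the two dark-channel means is what dehazing methods exploit to estimate the transmission map, and it doubles as a simple quantifiable haze measure.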
Authors - Chanuka Dinuwan, Hiruni Amandakoon, Iroshan Aberathne, Tharindu Wimalarathna, Rashmi Ratnayake Abstract - The Application Programming Interface (API) has become the primary method for integrating different software systems. Malicious bots have recently become a main cyber security threat and have been used as an infrastructure to carry out almost any form of cyberattack. The existing methods, however, have been insufficient for solving this issue, specifically for APIs. This study introduces an automated malicious bot attack detection tool for APIs based on artificial intelligence and machine learning. Time series forecasting and a neural network were used to develop a model on the API log data. Without human interaction, the trained model recognizes malicious bot attempts in APIs and prevents them in real time. Moreover, the performance measures of the model are a key indicator for using this application as a real-time bot detector and preventer for APIs.
Authors - Ashish Kumar, Nivedita Gupta, Monika Saini Abstract - The prime objective of the current investigation is to optimize the availability of the generator used in steam power plants by utilizing various nature-inspired algorithms. The decision variables associated with the failure and repair rates of the generator are considered exponentially distributed, and a Markovian approach is deployed to develop the mathematical model. The Chapman-Kolmogorov difference equations are developed and an expression for system availability is derived. Genetic algorithms and particle swarm optimization (PSO) techniques are employed to optimize the generator's availability. Numerical results depicted that PSO performed better in predicting the optimum availability of the system. The results are helpful for system designers as well as for maintenance engineers.
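For a single Markovian unit with failure rate lambda and repair rate mu, the steady-state availability reduces to A = mu / (lambda + mu), and a small hand-rolled PSO can search over assumed rate bounds. The bounds, swarm size, and PSO coefficients below are illustrative placeholders, not the paper's multi-state generator model.

```python
# Sketch: maximizing steady-state availability A = mu/(lambda+mu) with a
# minimal particle swarm over assumed (failure rate, repair rate) bounds.
import numpy as np

def availability(lam, mu):
    """Steady-state availability of a single Markovian repairable unit."""
    return mu / (lam + mu)

rng = np.random.default_rng(1)
lo, hi = np.array([0.01, 0.1]), np.array([0.1, 1.0])   # assumed rate bounds
pos = rng.uniform(lo, hi, size=(20, 2))                # 20 particles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), availability(pos[:, 0], pos[:, 1])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(50):
    r1, r2 = rng.random((2, 20, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)                   # keep particles feasible
    val = availability(pos[:, 0], pos[:, 1])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"optimum near lambda={gbest[0]:.3f}, mu={gbest[1]:.3f}, "
      f"A={availability(gbest[0], gbest[1]):.4f}")
```

For this toy objective the optimum sits at the lowest failure rate and highest repair rate, so the swarm should converge toward that corner of the bounds.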
Authors - Ashish Lahase, Suvarnsing Bhable, Ratnadeep Deshmukh, Sunil Nimbhore Abstract - Speaker diarization is the process of determining "who spoke when?" in a piece of audio that contains an unknown amount of speech from an unknown number of participants. It was first suggested as a study area within automatic speech recognition, where speaker diarization serves as a feature extraction and upstream processing step. In addition, in this study we examine open-source diarization toolkits, approaches, and free datasets, and describe the ongoing advancement and use of diarization systems.
Authors - Rathin Raj R S, G R Ramya Abstract - The portrayal of someone else's original thoughts or work as one's own without giving the author credit is known as plagiarism. A review revealed that the COVID epidemic, which nearly brought the world to a standstill, had a significant impact on the quality of academic work published: of the 310 publications in infected journals that were examined, 41.6% were found to be plagiarized, and it was noted that technology was the cause. In order to compare the similarity of two articles while maintaining contextual value throughout, and not merely matching words, this paper focuses on detecting plagiarism in content between two articles using Transformer models and an unsupervised community detection methodology. This protects not just the presentation but also the ideas in the original content. The basic inputs to the proposed system are any two articles, each of which is fed through the entire pipeline. The outputs are then used to determine the degree of plagiarism between the articles. Word embeddings are created using the BERT transformer model, and communities inside the embeddings are found using the Louvain community detection technique. A determination of plagiarism is made using the score of the number of communities existing across the two articles.
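The community-detection step can be sketched end to end with TF-IDF vectors standing in for the paper's BERT embeddings; the similarity threshold and toy sentences are assumptions for illustration.

```python
# Sketch: build a sentence-similarity graph and count Louvain communities.
# TF-IDF stands in for BERT embeddings in this simplified version.
import networkx as nx
from networkx.algorithms.community import louvain_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The cat sat on the mat.",
    "A cat was sitting on a mat.",
    "Stock prices rose sharply today.",
    "Markets rallied as prices surged.",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))

# Connect sentence pairs whose similarity passes an (assumed) threshold.
G = nx.Graph()
G.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        if sim[i, j] > 0.1:
            G.add_edge(i, j, weight=sim[i, j])

communities = louvain_communities(G, seed=0)
print(f"{len(communities)} communities: {communities}")
```

Paraphrased sentences land in the same community even though their wording differs, which is the contextual (rather than word-level) matching the paper relies on; the community counts across two articles then feed the plagiarism score.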
Authors - Sushma Oinam, Chandralika Chakraborty Abstract - As people became more aware of environmental problems, and with technology being an indispensable part of our lives, we have begun to adopt green technologies that are friendly to the environment. Green technology aims to recycle and reuse tech products, which reduces the exploitation of resources in their production, controls landfill and e-waste, results in a reduction of overall energy consumption, and also serves as a means of raising awareness of green initiatives. After the 1987 Brundtland Report, Our Common Future, the term sustainable development began to be institutionalized with the 1992 UN Conference on Environment and Development (UNCED), also known as the Earth Summit. It refers to utilizing resources in such a way that they fulfil the basic needs of the present without compromising the future. Beyond environmental protection, its goal is to generate balanced growth in the economy and the inclusion of people in the changing phases of modernity. Technology unequivocally helps humans in various fields, ranging from international communication to the generation of renewable resources. In terms of sustainability, it has replaced the use of paper, processes can be optimized to fewer steps, and monitoring carbon emissions can be done easily. However, high-tech industry has serious impacts: the robots used for manufacturing consume more energy and produce more e-waste. As a remedy for all of this, green technologies are adopted, which help in sustainable development. Nevertheless, challenges like the control of diffuse emissions and the spillovers produced require proper technological and organizational innovations. Even so, green IT is the technology that is paving the way for sustainable development, accompanied by further advances in the coming decades.
Authors - Saadhikha Shree S, Adarsh JK, Umamaheswari E Abstract - More than a billion people in today's world are estimated to suffer from some type of disability at some point in their lives. Patients with mobility impairment face various challenging circumstances every day. A completely or partially limited range of motion, delayed recovery, and the lack of availability of proper care and assistance all stand in the way of their proper functioning. Their limited functional ability puts them at risk of accidents, attack, theft, or violence of any kind when precautions are not taken. It impacts their mental health and affects how they function in society. To solve these issues, a robot that lets the user monitor/modify their environment and alert emergency contacts is designed. With the help of a Pi camera, the robot also helps in surveillance of the house, so that the user can stay in the same position and have an 'around the house' view. The received video feed is then subjected to object detection using computer vision. The robot and the user are connected through Wi-Fi, and the necessary requests are made via the mobile application. Hence, this paper provides a way to solve the significant problems faced by these patients in an affordable and efficient way.
Authors - Niranjan Kumar Mandal Abstract - A theoretical performance study has been carried out on the application of a PID controller to a robot manipulator that can be used for welding vehicle parts in the automobile industry. For this, a block diagram model of the whole system has been obtained, from which the transfer function and hence the overall gain of the system have been derived. Using MATLAB, the frequency responses of magnitude and phase angle of the system have been plotted, and the gain margin and phase margin have been determined. A study has also been carried out to find the steady-state errors of the system for input signals such as the unit step, unit ramp and unit parabolic. Results have been tabulated, shown graphically and explained.