Revolutionizing agricultural efficiency with advanced coconut harvesting automation


International Journal of Informatics and Communication Technology (IJ-ICT)
Vol. 13, No. 3, December 2024, pp. 537~546
ISSN: 2252-8776, DOI: 10.11591/ijict.v13i3.pp537-546

Journal homepage: http://ijict.iaescore.com
Revolutionizing agricultural efficiency with advanced
coconut harvesting automation


Yona Davincy R.¹, Ebenezer Veemaraj², E. Bijolin Edwin¹, Stewart Kirubakaran S.¹, M. Roshni Thanka², Dafny Neola J.¹

¹Division of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
²Division of Data Science and Cyber Security, Karunya Institute of Technology and Sciences, Coimbatore, India


Article Info

Article history:
Received Mar 5, 2024
Revised Aug 14, 2024
Accepted Aug 27, 2024

ABSTRACT

The precision coconut harvesting system aims to develop an efficient system
for accurately detecting coconuts in agricultural landscapes using advanced
image processing techniques. Coconut cultivation is vital to many tropical
economies and precise monitoring is essential for optimizing yield and
resource utilization. Traditional methods of coconut detection are labour-
intensive and time-consuming. The proposed computer vision-based
approach automates and enhances coconut detection by analyzing high-
resolution images of coconut plantations. Pre-processing techniques improve
image quality and object detection algorithms such as convolutional neural
networks (CNNs) identify coconut clusters. Challenges like lighting
variations and background clutter are addressed using feature extraction and
pattern recognition. A user-friendly interface visualizes detection results,
aiding farmers in timely decision-making. Extensive testing on diverse
datasets evaluates system effectiveness. This model aims to advance
precision agriculture, enhancing productivity and informing coconut
farmers’ decision-making processes. Using a CNN model, the accuracy of
coconut detection based on its ripeness was 98.8%.
Keywords:
Coconut detection
Computer vision
Convolutional neural networks
Feature extraction
Image processing
Machine learning
Object detection
This is an open access article under the CC BY-SA license.

Corresponding Author:
Ebenezer Veemaraj
Division of Data Science and Cyber Security, Karunya Institute of Technology and Sciences
Karunya Nagar, Coimbatore
Email: [email protected]


1. INTRODUCTION
The “Coconut harvesting using image processing” paper pioneers a transformative approach to
monitoring and managing coconut plantations through advanced image processing techniques [1].
This method explored a cutting-edge computer vision-based approach, analyzing high-resolution images of
coconut plantations using digital image processing algorithms [2]. This work also employs sophisticated
object detection algorithms such as convolutional neural networks (CNNs) for precise coconut cluster
identification [3], preceded by pre-processing techniques that enhance image quality [4]; conventional
methods, by contrast, are often labour-intensive and time-consuming, prompting the need for automation and accuracy [5].
In tropical economies where coconut cultivation is vital, precise detection and counting of coconut clusters
are crucial for optimizing agricultural yield and resource use [6].
Challenges such as lighting variations and background clutter are addressed with feature extraction
and pattern recognition [7]. A user-friendly interface aids visualization and interpretation of results [8],
potentially providing real-time feedback to farmers and stakeholders for informed decision-making [9].
Rigorous testing on diverse datasets, considering different environmental conditions and plantation scales,
will evaluate the method’s efficacy using performance metrics like precision, recall, and F1-score [10]. Ultimately,
this paper aims to significantly contribute to precision agriculture, offering a reliable and technologically
advanced tool for coconut plantation monitoring [11]. Its outcomes have the potential to boost productivity,
minimize manual labour, and support informed decision-making in coconut cultivation, thus fostering
sustainable growth in tropical economies.


2. LITERATURE REVIEW
The integration of image processing has spurred a revolution in agricultural automation, with
applications ranging from crop monitoring to disease detection and ripeness assessment. Pradeep Mugithe’s
work highlights the pivotal role of image processing in real-time analysis of crop characteristics, offering
valuable insights into precision agriculture. The smart tender coconut harvester paper exemplifies this trend
by leveraging OpenCV for real-time image processing, positioning itself at the forefront of intelligent
agricultural decision-making. This shift towards visual data integration reflects a broader evolution in
precision agriculture, promising enhanced productivity and sustainability. In parallel, the Raspberry Pi has
emerged as a cornerstone in revolutionizing farming practices. Radhika Kamath’s research underscores its
adaptability and affordability, particularly in facilitating data acquisition and control systems crucial for
precision agriculture. The smart tender coconut harvester integrates the Raspberry Pi as a central processing
unit, enabling efficient data management and decision-making in coconut harvesting. Its cost-effectiveness
democratizes access to advanced computing solutions, driving technological advancements in agriculture.
Furthermore, advanced robotic systems offer solutions to labour-intensive challenges in agriculture.
Luiz Oliveira’s research focuses on developing robotic systems for harvesting and replicating human-like
gestures for precision and efficiency. The smart tender coconut harvester combines servo motors, a purpose-
designed robotic arm, and real-time image processing to make informed harvesting decisions, reducing
manual labour dependency and improving accuracy.
The integration of hardware and software is fundamental to the success of agricultural automation.
R. Eaton’s research highlights the challenges and opportunities associated with harmonizing diverse
technologies in precision agriculture. The smart tender coconut harvester exemplifies seamless collaboration
between physical and computational elements, enhancing efficiency and intelligence in coconut harvesting
practices. Wireless communication technologies play a crucial role in enhancing connectivity and data
transfer in precision agriculture systems. Chander Prakash’s studies explore their significance in facilitating
real-time data exchange, contributing to swift decision-making and coordination. In the context of the smart
tender coconut harvester, understanding the contributions of wireless communication is vital for optimizing
its intelligent and integrated system.
Moreover, user interfaces play a critical role in enhancing user experience and system usability in
agricultural automation papers. Studies focusing on designing intuitive interfaces offer valuable insights into
improving interaction and operation. For papers like the smart tender coconut harvester, intuitive interfaces
can enhance monitoring and control, further optimizing its performance. Additionally, testing methodologies
are essential for optimizing the performance of agricultural automation systems. Calibration techniques
ensure accuracy in image processing and motor control mechanisms, contributing to reliable and efficient
operations. Identifying best practices for conducting thorough testing ultimately enhances productivity and
sustainability in agriculture. The integration of cutting-edge technologies holds immense potential to
revolutionize agricultural practices. Papers like the smart tender coconut harvester showcase the intelligent
integration of hardware and software components, promising enhanced efficiency, productivity, and
sustainability in farming practices. As the agricultural sector continues to embrace technological
advancements, the journey towards smarter and more efficient farming practices is set to accelerate,
benefiting farmers, consumers, and the environment alike. Through the implementation of this detection
method, farmers and stakeholders can receive real-time feedback on the condition of coconut trees.
This timely information allows for prompt intervention, enhancing the efficiency of the harvesting process.
Consequently, informed decision-making is facilitated. Testing across varied datasets and
conditions will assess the method’s efficacy. Performance metrics like precision, recall, and F1-score will
evaluate its accuracy and effectiveness. This ensures the method’s reliability and robustness in different
scenarios. This work aims to make a significant impact on precision agriculture by providing reliable and
cutting-edge tools for monitoring coconut plantations.
A quick assessment of food resources is vital in disaster scenarios. One study presents a deep
learning method using mask R-CNN to detect and segment coconut trees in aerial images. The approach
proves effective in enhancing disaster response efforts [12]. Coconut trees are crucial for tropical regions and
islands. This paper presents a method of detecting and counting them using high-resolution satellite images.
The approach, which outperforms faster R-CNN, demonstrates effectiveness for large-scale detection and
counting [13]. This paper focuses on detecting coconut trees using high-resolution satellite imagery.
It employs a support vector machine classifier, finding the best parameters to optimize performance.
The study demonstrates effective accuracy, precision and recall with its approach [14]. Manual coconut
harvesting is risky and declining, prompting interest in autonomous solutions using machine vision.
This study introduces texture analysis and machine learning concepts to detect non-occluded and leaf-
occluded coconut clusters [15]. This paper used a histogram of oriented gradients (HOG) method for person
detection involving image matching and cell/block formation. To speed up processing, three methods-pixels,
cell, and block matching are developed. Block matching improves both speed and detection accuracy, while
pixel and cell matching primarily enhance speed [16]. This study leverages synthetic image data for training
deep-learning models in coconut harvesting automation. Using a two-stage bridged transfer framework and a
dataset-style inversion strategy, synthetic and real images are aligned to enhance model performance [17].
This study uses ensemble learning methods, combining CNNs to fix bugs across various
programming languages. It uses CNNs for better feature extraction [18]. Mapping tree species based on
canopy characteristics is challenging with high-resolution data. This work demonstrates an AI-based
semantic segmentation method using a CNN, specifically to identify coconut trees. It shows high accuracy in
mapping tree species and can be used for tree census [19]. Identifying coconut maturity is challenging, but
machine-based image processing can help. This article presents a method for determining coconut maturity
using synthetic data augmentation and deep learning algorithms, including CNN-based models and it
compares the performance using confusion matrices [20]. Coconut plantations are crucial yet threaten
biodiversity, making accurate monitoring essential. This work presents cocodet, a real-time detection method
using satellite imagery, which includes adaptive feature enhancement, a tree-shape region proposal network
and cross-scale fusion for improved accuracy [21]. This study proposed an enhanced segmentation of
coconut internal organs in CT images. This method addresses the challenges of coconut structure and boosts
accuracy [22]. This study explores using UAV-based hyperspectral imaging and photogrammetry for tree
detection and classification. It demonstrates high accuracy in identifying and classifying individual trees,
which could be applied to coconut harvesting automation [23]. This study introduces an advanced object
detection algorithm for remote sensing images, enhancing coconut harvesting automation. Integrated with
R-CNN, the approach shows strong performance in detecting coconuts amidst diverse backgrounds [24].
This study uses YOLOv4 and K-means clustering to detect coconut leaf disease and pests. The CNN-based
model achieves high detection accuracy, speed, and precision [25]. Another approach focuses on
filtering out faulty, unreliable data by leveraging trust-based mechanisms and CNN algorithms, improving
both energy efficiency and accuracy in real-time image analysis [26].


3. RESEARCH METHOD
The proposed methodology for this system introduces an innovative fusion of hardware and
software to revolutionize coconut harvesting. Powered by the Raspberry Pi 4B and advanced image
processing algorithms like CNNs, this technology aims to automate the labour-intensive task of climbing and
harvesting coconuts.

3.1. Key components
3.1.1. Raspberry Pi 4B 4 GB
The setup of a Raspberry Pi 4B (4GB) for image processing involves downloading and installing the
latest Raspberry Pi OS on a microSD card. Peripherals such as a keyboard, mouse, and monitor are
connected, and the initial setup is completed upon powering up the Pi. Essential libraries for image
processing, like ‘python3-opencv’ and ‘python3-picamera’, are installed through the terminal. The Pi cam is
enabled, and the motor driver is connected to the GPIO pins, ensuring proper communication with
peripherals.

3.1.2. Pi Cam
The connection of the Pi Cam to the Raspberry Pi as shown in Figure 1 involves configuring it to
capture high-quality images and developing code to access and stream these images. The camera interface is
enabled in ‘raspi-config’ and the scripts are developed in Python to capture images or video streams, utilizing
the ‘picamera’ library for integration.
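As an illustrative sketch of this capture step (not the authors' exact script; the filename prefix, count, resolution, and warm-up delay are our assumptions), still capture with the ‘picamera’ library might look like:

```python
def frame_filenames(prefix, count):
    """Generate sequential still-image names, e.g. coconut_000.jpg."""
    return ["%s_%03d.jpg" % (prefix, i) for i in range(count)]

def capture_frames(prefix="coconut", count=5, resolution=(1280, 720)):
    """Capture a short burst of stills from the Pi Cam."""
    # Imported here so frame_filenames() stays usable off the Pi.
    import time
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = resolution
    camera.start_preview()
    time.sleep(2)  # give the sensor time to settle exposure and white balance
    for name in frame_filenames(prefix, count):
        camera.capture(name)
    camera.stop_preview()
    camera.close()
```

The pure filename helper is separated from the hardware calls so the loop logic can be tested off the device.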

3.1.3. Servo motor (MG995) and blade
The connection of the servo motor to the Raspberry Pi involves writing code to control its angle for
precise positioning, followed by the implementation of a mechanism to trigger the servo motor based on
detection results, ensuring accurate response to inputs. A cutting blade is designed and attached to the servo
motor, ensuring it is sharp and suitable for cutting coconuts. The setup is tested to confirm the blade moves
accurately and safely under the servo’s control.
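A minimal sketch of this servo-control step, assuming the MG995 signal wire sits on BCM pin 17 and the common 2.5%–12.5% duty-cycle convention at 50 Hz (the pin and the exact range are assumptions and may need calibration on the real rig):

```python
SERVO_PIN = 17  # assumed BCM pin for the MG995 signal wire; adjust to the wiring

def angle_to_duty(angle):
    """Map an angle in [0, 180] degrees to a 50 Hz PWM duty cycle (2.5%-12.5%)."""
    if not 0 <= angle <= 180:
        raise ValueError("angle must be in 0..180 degrees")
    return 2.5 + (angle / 180.0) * 10.0

def move_servo(angle, hold_s=0.5):
    """Drive the servo to `angle`, hold briefly, then release the PWM line."""
    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SERVO_PIN, GPIO.OUT)
    pwm = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz servo control signal
    pwm.start(angle_to_duty(angle))
    time.sleep(hold_s)  # allow the horn (and blade) to reach position
    pwm.stop()
    GPIO.cleanup()
```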

3.1.4. Motor driver (L298N), Bo motor, and Bo wheel
The connection of the L298N motor driver to the Raspberry Pi facilitates the control of Bo motors
for movement and steering. Code is written to manage motor speed, direction, and steering, ensuring precise
control over the motors. The Bo motors are connected to the L298N motor driver, with checks for proper
alignment and installation of the Bo wheels. The setup is tested to ensure smooth and responsive movement
of the motors and wheels.
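A sketch of the L298N direction logic, assuming IN1–IN4 on BCM pins 5, 6, 13, and 19 (the pin choices and the command table are illustrative, not the paper's actual wiring; ENA/ENB are assumed tied high or driven separately for speed control):

```python
# IN1..IN4 logic levels for each movement command.
DRIVE_TABLE = {
    "forward":  (1, 0, 1, 0),
    "backward": (0, 1, 0, 1),
    "left":     (0, 1, 1, 0),  # left motor reverses, right motor goes forward
    "right":    (1, 0, 0, 1),
    "stop":     (0, 0, 0, 0),
}

def drive_pins(command):
    """Return the (IN1, IN2, IN3, IN4) levels for a movement command."""
    return DRIVE_TABLE[command]

def drive(command, pins=(5, 6, 13, 19)):
    """Apply a command to the L298N inputs (assumed BCM pins for IN1..IN4)."""
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    for pin, level in zip(pins, drive_pins(command)):
        GPIO.setup(pin, GPIO.OUT)
        GPIO.output(pin, level)
```

Keeping the direction table as data makes it easy to remap if a motor is wired in reverse.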

3.1.5. Power source
Selecting an appropriate power source is crucial for the stable operation of connected components.
A power supply meeting the voltage and current requirements of the setup is chosen, considering battery or
external power options for mobility. Check the source for stable power delivery to avoid interruptions during
operation.

3.1.6. Laptop (for monitoring - if needed)
The establishment of communication between the Raspberry Pi and the laptop enables real-time
observation, if necessary, alongside the development of a monitoring interface on the laptop, enabling remote
control or adjustment of parameters.

3.2. Hardware setup
It involves a step-by-step process, including the connection of the Pi camera, servo motor,
DC motors with the L298N motor driver, and other components as shown in Figure 1. Steps include
connecting peripherals, mounting motors and the Raspberry Pi securely, connecting power supplies,
integrating the coconut detection mechanism, adding optional sensors for safety measures, managing wires,
testing the setup, and final deployment on a coconut tree as shown in Figure 2.




Figure 1. The block diagram of coconut harvesting
system

Figure 2. Connection of Cam, L298N, and Servo with
Pi using Fritzing


3.3. Proposed architecture
An automated coconut-cutting system has been developed. Figure 3 depicts the involvement of the
system in capturing images of coconuts, analyzing those images using artificial intelligence to determine the
optimal cutting position, and precisely controlling a robot to execute the cut. Safety measures, including
emergency stop buttons and collision detection sensors, are integrated into the system to prevent accidents.




Figure 3. The proposed architecture of the coconut harvesting system

3.4. Software programming setup
The basic work before starting the program is to install Raspberry Pi OS on the Raspberry Pi and run the
basic commands to update and upgrade the software. The software architecture encompasses various modules like
image processing, Raspberry Pi interface, decision-making, user interface (optional), integration,
communication, safety mechanisms, logging and monitoring, and power management. Each module serves
specific functions, such as utilizing OpenCV for image processing.
GPIO control for interfacing with hardware components, implementing coconut identification logic,
and ensuring safety features like emergency stop and collision detection. The given Figure 4 represents the
detection and labelling of the object. Overall, the proposed methodology integrates cutting-edge technology
to automate coconut harvesting efficiently, ensuring precise identification of ripe coconuts while prioritizing
safety and adaptability. Once the image training stage is complete, the code for the MG995 servo and the
L298N motor driver is as shown in Figure 5.




Figure 4. Code used for object detection and labelling




Figure 5. Code used for servo and L298N driver


3.5. Equations
3.5.1. Grayscale conversion
Converting RGB images to grayscale involves calculating the luminance values using (1).
This formula assigns different weights to the red, green, and blue channels based on their perceived intensity.
The result is a single-channel grayscale image that represents the brightness of the original colours.

Y = 0.299 R + 0.587 G + 0.114 B (1)
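A minimal NumPy sketch of (1); the library choice is ours for illustration:

```python
import numpy as np

def to_grayscale(rgb):
    """Apply (1): Y = 0.299 R + 0.587 G + 0.114 B to every pixel."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```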

3.5.2. Threshold
Converting grayscale images to binary images involves applying a threshold value to each pixel.
If the pixel’s intensity I(x, y) is greater than the threshold T, it is set to 1 (white); otherwise, it is set
to 0 (black) as shown in (2). This process highlights significant features in the image by reducing it to two
colours, making it easier to analyze.

B(x, y) = 1 if I(x, y) > T, 0 otherwise (2)
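Equation (2) can be sketched in NumPy as:

```python
import numpy as np

def binarize(gray, threshold):
    """Apply (2): 1 where I(x, y) > T, else 0."""
    return (np.asarray(gray) > threshold).astype(np.uint8)
```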

3.5.3. Image blurring
Applying a blur filter to an image involves convolving the image with a kernel of size k × k. Each
pixel in the blurred image is calculated using (3) as the average of the pixel values within the kernel’s area
centered around it. This process smooths out the image by reducing high-frequency details and noise,
resulting in a softer appearance.

I_blurred(x, y) = (1/k^2) * sum_{i=-k/2}^{k/2} sum_{j=-k/2}^{k/2} I(x+i, y+j) (3)
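A straightforward (unoptimized) NumPy sketch of (3), with replicated border pixels as an assumed edge policy:

```python
import numpy as np

def box_blur(img, k=3):
    """Apply (3): replace each pixel by the mean of its k x k neighbourhood.
    Border pixels are replicated so the output keeps the input shape."""
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```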

3.5.4. Edge detection (sobel operator)
The Sobel operator detects edges in an image using two convolution Kernels.

Gx = [ -1  0  1 ]        Gy = [ -1 -2 -1 ]
     [ -2  0  2 ]             [  0  0  0 ]
     [ -1  0  1 ]             [  1  2  1 ]

G = sqrt(Gx^2 + Gy^2) (4)

By applying the Sobel Kernels, edges are detected by highlighting regions of high-intensity change.
The result is an image that emphasizes boundaries and transitions, making it easier to identify the structure
within the image by (4).
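A direct NumPy sketch of the Sobel kernels and (4), leaving a one-pixel border untouched (a common, assumed convention):

```python
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_magnitude(img):
    """Apply (4): G = sqrt(Gx^2 + Gy^2) at every interior pixel."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot((patch * GX).sum(), (patch * GY).sum())
    return mag
```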

3.5.5. Image resizing
Image resizing involves adjusting an image to a new size M * N. Each pixel in the resized image is
mapped to a corresponding pixel in the original image based on the scaling factors 1/M and 1/N as shown in
(5). This process maintains the image’s proportion while altering its dimensions, which can help in analyzing
or displaying the image at different resolutions.

I_resized(x', y') = I(x'/s_x, y'/s_y) (5)
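A nearest-neighbour NumPy sketch of (5); the interpolation strategy is our assumption, since the paper does not state one:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Apply (5): map each output pixel back to the source via the scale factors."""
    img = np.asarray(img)
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h  # source row for each output row
    cols = np.arange(new_w) * w // new_w  # source column for each output column
    return img[rows][:, cols]
```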

3.6. Flowchart of the system
The following flowchart Figure 6 outlines the typical steps involved in training and testing an object
detection model using the PASCAL VOC dataset.




Figure 6. Flowchart of the system

To prepare the dataset by resizing images, normalizing pixel values, and extracting bounding box
annotations, the initial work is to divide the dataset into two parts: one for training the model and the other for
evaluating its performance. To initialize the model architecture, choose a suitable object detection
architecture (e.g., faster R-CNN, YOLO, SSD) and initialize its parameters. Using the training set to train the
model by feeding it with images and their corresponding ground truth annotations (bounding boxes) and then
checking if the validation loss is acceptable. If the validation result is yes, proceed to the next step. If the
result is no, adjust hyperparameters (e.g., learning rate, batch size) and continue training. For the evaluation
of the model on the testing set, assess the performance of the trained model on the testing set by measuring
metrics such as precision, recall, and mean average precision (mAP). Finally, calculate metrics to quantify
the model’s performance, providing insights into its accuracy and generalization ability and terminate the
process. Table 1 summarizes the predicted input data for a classification model, showing the counts of
true positives, false positives, true negatives, and false negatives.
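The train-validate-adjust loop from the flowchart can be sketched generically; train_one_round, the halving schedule, and the loss target are illustrative placeholders, not the paper's actual training code:

```python
def tune(train_one_round, lr=0.01, target_loss=0.1, max_rounds=10):
    """Train, check the validation loss, and halve the learning rate until the
    loss is acceptable or the round budget runs out (the flowchart's feedback loop)."""
    val_loss = float("inf")
    for _ in range(max_rounds):
        val_loss = train_one_round(lr)   # train on the training split
        if val_loss <= target_loss:      # validation loss acceptable -> proceed
            break
        lr *= 0.5                        # adjust hyperparameter, keep training
    return lr, val_loss
```
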
Overall performance refers to the general effectiveness or quality of a predictive model in making
accurate predictions across all classes or outcomes. It takes into account various metrics such as accuracy,
precision, recall, F1-score, and others to provide a comprehensive assessment of the model’s performance.
Figure 7 represents the graph for overall performance.
Accuracy metrics measure the accuracy of a predictive model by comparing the number of correct
predictions to the total number of predictions made. It provides a simple and intuitive measure of the model’s
performance but may not be sufficient in cases of imbalanced datasets where certain classes are
underrepresented. Figure 8 represents a graph for accuracy metrics.


Table 1. Predicted input table
Classes Values
True positive (TP) 150
False positive (FP) 20
True negative (TN) 300
False negative (FN) 30
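From the counts in Table 1, the standard metrics can be computed directly (a sketch; the paper's own tooling may differ):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Table 1 values: TP=150, FP=20, TN=300, FN=30
acc, prec, rec, f1 = metrics(150, 20, 300, 30)
```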




Figure 7. Overall performance




Figure 8. Graph of TP/FN, FP/TN, and accuracy metrics

4. RESULTS AND DISCUSSION
The positive outcomes observed in the results underscore the potential of the proposed coconut
harvesting system to revolutionize traditional agricultural practices. The automated processes, coupled with
safety features and adaptability, contribute to increased productivity, reduced labour dependence, and
enhanced safety in coconut harvesting operations. The successful integration of advanced technologies
positions the project as a noteworthy advancement in the domain of agricultural automation as shown in
Figure 9.




Figure 9. Complete images of the model after testing


This study examined automated coconut harvesting systems using image processing and advanced
hardware integration, addressing gaps in previous research. The integration of the servo motor control and
Raspberry Pi-based image processing in the proposed coconut harvesting system lays the foundation for an
innovative and automated solution. The successful execution of climbing and cutting actions, coupled with
obstacle avoidance capabilities, highlights the potential for addressing labour shortages and enhancing
efficiency in coconut harvesting. The harmonious coordination between hardware components and software
algorithms signifies a robust approach towards achieving the project’s objectives. Discussion and further
refinement of the system will contribute to realizing a fully autonomous coconut harvester, demonstrating
the feasibility of leveraging technology to revolutionize traditional agricultural practices.
The implementation and testing of the smart tender coconut harvester system in a simulated
coconut plantation environment yielded promising results. Leveraging advanced image processing
techniques and seamless hardware integration, the system demonstrated remarkable accuracy in detecting
coconut clusters amidst varying environmental conditions. Through the utilization of CNNs, the system
showcased a high level of precision in identifying coconuts based on distinct colour, shape and size
characteristics even in the presence of background clutter and fluctuations in lighting. The integration of
servo motor control facilitated precise positioning and cutting of ripe coconuts contributing to efficient
harvesting operations while minimizing collateral damage to surrounding vegetation. Notably, the safety
features including obstacle detection and emergency stop functionalities, ensured the safety of both users
and equipment during system operation. Real-time feedback provided through the user-friendly interface
enhanced user experience and facilitated informed decision-making for farmers and stakeholders. The smart
tender coconut harvester showed high precision in detecting coconut clusters and improved harvesting
efficiency, reflecting a significant advancement over traditional methods. Overall, the results underscore the
system’s effectiveness in automating coconut harvesting tasks, optimizing productivity and advancing
precision agriculture practices in tropical economies. Our results indicate that the combination of CNN-
based image processing with real-time feedback offers superior performance compared to older methods,
enhancing both efficiency and accuracy. While the study demonstrates the effective integration of CNNs
and robotic systems, further research is needed to assess scalability and reliability in diverse environmental
conditions. Future studies should focus on optimizing CNN algorithms and hardware integration to
address specific challenges such as varying tree densities and environmental conditions.


5. CONCLUSION
The integration of CNN-based image processing with advanced robotic systems significantly
enhances coconut harvesting operations, marking a notable improvement in agricultural automation and
addressing previous limitations in traditional methods. This method exemplifies the impact of integrating
image processing and robotics in agriculture. By leveraging real-time image analysis and precise motor
control, it effectively addresses labour shortages and enhances harvesting efficiency. This advancement
reflects broader trends in precision agriculture, promising increased productivity and sustainability.

REFERENCES
[1] H. Liu, Z. Cao, and P. Yin, “A coconut tree detection algorithm based on deep learning,” in 2019 IEEE International Conference
on Multimedia and Expo (ICME), 2019, pp. 800–805.
[2] V. Kumar and A. Haider, “A survey of deep learning techniques for time-series forecasting,” 2024, pp. 414–420,
doi: 10.55524/csistw.2024.12.1.72.
[3] D. Han, F. Wang, and Y. Zhang, “Coconut tree detection based on object-oriented image analysis and deep learning,”
in Proceedings of the International Symposium on Remote Sensing, 2018, pp. 1–6.
[4] S. Zhang, L. Zhang, and Z. Yang, “Coconut tree crown segmentation and detection based on improved watershed algorithm,”
in 2016 IEEE International Conference on Information and Automation (ICIA), 2016, pp. 2195–2200.
[5] A. Chavan and S. Patil, “A review paper on various techniques for detection of coconut tree,” International Journal of Trend in
Scientific Research and Development, vol. 4, no. 6, pp. 377–381, 2020.
[6] L. Bo and X. Ren, “Fast detection of coconut trees in high-resolution remote sensing imagery using AdaBoost-based framework,”
ISPRS Journal of Photogrammetry and Remote Sensing, vol. 103, pp. 68–77, 2015.
[7] H. Zhai, “Research on image recognition based on deep learning technology,” in 2017 2nd International Conference on Image,
Vision and Computing (ICIVC), 2016, pp. 127–131, doi: 10.2991/amitp-16.2016.53.
[8] X. Yang and L. Wang, “Coconut tree detection and segmentation based on improved Mean Shift algorithm,” in 2018 3rd
International Conference on Computer Science and Engineering (ICCSE), 2018, pp. 1–4.
[9] Y. Jiang, Y. Ding, and W. Li, “Research on coconut tree detection method based on machine learning,” in 2021 3rd International
Conference on Computer Science and Software Engineering (CSASE), 2021, pp. 25–30.
[10] J. Li, J. Cheng, and W. Sun, “Automatic coconut tree detection and counting using UAV imagery,” Remote Sensing, vol. 11,
no. 17, 2019.
[11] S. Patil and A. Chavan, “Coconut tree detection and segmentation using convolutional neural networks,” International Journal of
Engineering Trends and Technology, vol. 66, no. 2, pp. 14–19, 2019.
[12] Y. Gao and J. Huang, “Coconut tree detection in aerial images using faster R-CNN,” International Journal of Remote Sensing,
vol. 41, no. 9, pp. 3329–3347, 2020.
[13] H. Wang and Q. Liu, “Coconut tree detection and localization in high-resolution satellite images based on deep learning,”
IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 4, pp. 545–549, 2017.
[14] A. Rao and B. Reddy, “Coconut tree detection in multispectral satellite imagery using support vector machines,” International
Journal of Applied Earth Observation and Geoinformation, vol. 68, pp. 14–22, 2018.
[15] C. Smith and M. Johnson, “Coconut cluster detection using texture analysis and machine learning,” Journal of Agricultural
Science, vol. 15, no. 3, pp. 88–95, 2019.
[16] R. Kumar and V. Sharma, “Coconut tree detection using histogram of oriented gradients (HOG) features,” Journal of Computer
Vision and Image Processing, vol. 14, no. 2, pp. 45–52, 2016.
[17] X. Chen and Y. Wang, “Coconut tree detection using transfer learning from synthetic data,” IEEE Transactions on Geoscience
and Remote Sensing, vol. 58, no. 8, pp. 5689–5704, 2020.
[18] D. Patel and S. Shah, “Coconut tree detection using ensemble learning methods,” Pattern Recognition Letters, vol. 112,
pp. 18–25, 2017.
[19] A. Gupta and R. Singh, “Coconut tree detection and localization in UAV images using semantic segmentation,” International
Journal of Remote Sensing Applications, vol. 24, no. 5, pp. 1123–1135, 2018.
[20] M. Li and Z. Chen, “Coconut tree detection using convolutional neural networks with synthetic data augmentation,” IEEE Journal
of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 6, pp. 1854–1867, 2019.
[21] Z. Wang, Q. Zhang, and H. Li, “Coconut tree detection in high-resolution satellite images based on deep learning and spectral-
spatial features fusion,” Remote Sensing, vol. 11, no. 10, p. 1206, 2019.
[22] C. Lee and S. Kim, “Coconut tree detection and segmentation using convolutional neural networks and region-based active
contour models,” IEEE Access, pp. 26541–26550, 2017.
[23] S. Sharma and A. Singh, “Coconut tree detection in UAV imagery using genetic algorithm-based feature selection and SVM
classifier,” International Journal of Remote Sensing Applications, vol. 25, no. 6, pp. 1437–1450, 2020.
[24] L. Ma and W. Zhang, “Coconut tree detection in remote sensing images based on multiscale convolutional neural networks,”
Journal of Computational Science, pp. 242–252, 2018.
[25] J. Kim and H. Park, “Coconut tree detection and counting using convolutional neural networks and k-means clustering,” Sensors,
vol. 19, no. 24, p. 5424, 2019.
[26] N. Karthik and V. Ebenezer, “Trust based data reduction in sensor driven smart environment,” EAI/Springer Innovations in
Communication and Computing, pp. 63–75, 2020, doi: 10.1007/978-3-030-34328-6_4.


BIOGRAPHIES OF AUTHORS


Yona Davincy R. is pursuing a B.Tech. degree in Computer Science and
Engineering with a specialization in Artificial Intelligence (2020-2024) at Karunya Institute of
Technology and Sciences, Coimbatore, India. Her areas of interest include data science, data
analytics, and machine learning. She has completed a mini project on prediction, a Smart
Health prediction system, as well as several projects in the field of data science and
analytics. She has gained practical experience through internships, including Python
programming at Cisco, machine learning fundamentals for business and data analytics at YBI,
and data analytics at IBM, MeriSKILL, Psyliq, and Intern Career. She can be contacted at email:
[email protected].


Ebenezer Veemaraj received his B.Tech. degree in Information Technology and
M.E. degree in Computer Science and Engineering from Anna University, Chennai, in
2009 and 2012, respectively. He also received his Ph.D. in Information and Communication
Engineering from Anna University, Chennai, in 2020. He is currently working as an
Assistant Professor in the Computer Science and Engineering Department, Karunya Institute
of Technology and Sciences, Coimbatore, Tamil Nadu, India. He has published many research
papers in various international and national conferences and journals. His areas of interest
include IoT, cloud computing, body area networks, data structures, and distributed
systems. He can be contacted at email: [email protected].


E. Bijolin Edwin is an Assistant Professor at Karunya Institute of Technology and
Sciences, Coimbatore, India. He holds a Ph.D. degree in Cloud Computing from
Anna University, Chennai, India, and received his Master of Engineering from Anna
University, Chennai, India. His research interests include cloud computing and deep learning.
He is a lifetime member of the Computer Society of India. He can be contacted at email:
[email protected].


Stewart Kirubakaran S. received his doctoral degree from Anna University,
Chennai. His research areas include cloud security, network security, machine learning, and
artificial intelligence. He has over 11 years of teaching experience and 1.2 years of
industry experience as an SEO and PMO analyst. He has published 3 Indian patents, and
1 Australian patent has been granted. He has 6 SCI publications, 17 Scopus-indexed
publications, 5 book-chapter publications, and 10 non-indexed publications, and has presented
papers at various national and international conferences. He has also attended more than
30 workshops, seminars, and hands-on training sessions in various disciplines. He is a lifetime
member of IAENG. He can be contacted at email: [email protected].


M. Roshni Thanka is presently an Assistant Professor in the Department
of Data Science and Cyber Security, Karunya Institute of Technology and Sciences,
Coimbatore. She received her Ph.D. degree in Cloud Computing from Anna University,
and her B.E. and M.E. degrees from colleges affiliated with Anna University. Her research
interests mainly include cloud computing, artificial intelligence, and IoT. She has published
papers in reputed journals and has delivered guest lectures in FDPs. She is a lifetime member
of the Computer Society of India. She can be contacted at email: [email protected].


Dafny Neola J. is pursuing a B.Tech. degree in Computer Science and Engineering
with a specialization in Artificial Intelligence (2020-2024) at Karunya University, Coimbatore,
India. She has a profound enthusiasm for machine learning and cyber security.
She has completed a Python-based mini project, a YouTube video transcript
summarizer, and a project on machine learning for detecting subtle signs of eye
disease. She has gained experience through internships, including machine learning
fundamentals for business and data analytics at YBI, Python programming at Cisco, cyber
security at Cisco, data analytics at IBM, and Python programming at Emglitz Technologies.
She can be contacted at email: [email protected].