Journal of Nuclear Energy Science & Power Generation Technology, ISSN: 2325-9809


Review Article, J Nucl Ene Sci Power Generat Technol Vol: 10 Issue: 9

Machine Learning Technique for the Assembly-Based Image Classification System

Ambuj Kumar Agarwal1*, D Angeline Ranjithamani 2, Pavithra M3, A Velayudham3, Anandaraj Shunmugam4 and Mohammed Ismail B5

1Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

2Department of Computer Applications, Francis Xavier Engineering College, Vannarpet, Tirunelveli, Tamil Nadu

3Department of Computer Science and Engineering, Jansons Institute of Technology, Karumathampatti, Coimbatore

4Department of Computer Science and Engineering, Jansons Institute of Technology, Karumathampatti, Coimbatore, Tamil Nadu, India

5School of CS and IT, DMI St. John the Baptist university, Mangochi, The Republic of Malawi, Southeastern Africa

6Department of Information Technology, Kannur University, Mangattupramba Campus Kannur, Kerala

*Corresponding Author: Ambuj Kumar Agarwal
Chitkara University Institute of Engineering and Technology
Chitkara University, Punjab, India
E-mail: ambuj4u@gmail.com

Received: August 31, 2021; Accepted: September 15, 2021; Published: September 22, 2021

Citation: Agarwal AK, Ranjithamani DA, Pavithra M, Velayudham A, Shunmugam A, et al. (2021) Machine Learning Technique for the Assembly-Based Image Classification System. J Nucl Ene Sci Power Generat Technol 10:9.

Abstract

Additive manufacturing, or 3D printing, is a vital innovation in the field of production processes. However, the ability to change the infill without affecting the exterior creates a distinct vulnerability for 3D printing technologies. This research includes a procedure to identify malicious infill defects in the printed object: 1) investigate malicious defects in the 3D printing process, 2) extract features from images of the simulated 3D printing process, and 3) perform a classification test with one group of non-defect test images and another group of infill-defect test images from the 3D printing process. Layer by layer, the images are gathered from the isometric view of the software model display. The extracted data are provided to the developed algorithms, the Naive Bayes classifier and the J48 decision tree. Among them, the Naive Bayes classifier shows an accuracy of about 85%, and the J48 decision tree shows a higher accuracy of about 96%.

Keywords: 3D printing; Machine learning; Image classification


Introduction

The blueprint of future manufacturing, such as cyber manufacturing systems, delineates a vision of benefits in cost, efficiency, and sustainability. However, cyber-attacks could threaten complex safety-critical manufacturing systems. The consequences of a cyber-attack on manufacturing systems could include defective parts, the cost of poor quality, damaged equipment, loss of customer loyalty, or even customer safety issues.

The ability to influence interior layers without affecting the exterior of 3D-printed parts presents distinct risks [1]. These characteristics may result in the unintentional creation of maliciously faulty parts. Turner conducted an experiment in which 3D printer users were uninformed that STL files had been intentionally modified, resulting in the production of a harmful part [2]. These flaws could lead to a decrease in yield load, a decrease in strain at failure, a variation in natural frequency, and other issues that compromise quality control and operation. Poor printing performance issues such as insufficient filling, flaws in thin walls, irregular extrusion, layer separation and cracking, detachment from the print bed, and other faults render most 3D-printed components unsuitable and dangerous to use.

In production, recognition systems have been used to find flaws such as welding faults, through visual inspection of metal parts and defect diagnosis in the rolling process, among other things. Likewise, finding faults in the production process using smart automated screening can efficiently remove specific infill risks. At the same time, it can detect a printing problem early on, rather than waiting until the end of a time-consuming manufacturing process to diagnose and scrap the part, wasting time, resources, and operating cost.

Background

By putting a void into an ASTM Standard D638-10 tensile test specimen and conducting the tensile test, Sturm et al. [1] were able to evaluate the impact of voids on component durability. The results of the experiment revealed that all of the void-containing samples cracked at the void region, while the unmodified samples failed normally. The yield load was decreased by 14% on average, and the strain at failure dropped from 10.4% to 5.8%. The work also defined several vulnerabilities in additive manufacturing and composite materials, and it advocated software inspections, hashing, quality control, and worker training as ways to mitigate these risks. The suggested quality-control methodology relies on physical factors such as tool temperatures, which are derived implicitly from machine settings. In contrast to Sturm et al., 1) this research adopted machine learning and image classification, and 2) this work applied the technique to software-simulated models rather than physical prints.

Vincent et al. [3] utilized measurement and control approaches to detect alterations in the intrinsic behavior of a produced item. This technology might be used with 3D-printed parts, but it would necessitate adaptive models of the engineered structures; thus it emphasized the need for innovative methodologies beyond existing quality-assurance systems for detecting attacks. Unlike Vincent et al., this work can identify modifications in 3D-printed parts without building adaptive models for structural health monitoring, and it performs monitoring during the production process rather than afterward.

To track the leakage of cyber-domain data, Chhetri et al. studied the acoustic side-channel of Fused Deposition Modeling (FDM) based rapid prototyping equipment, such as 3D printers [4]. This research differs from Chhetri's in the following ways: 1) Chhetri concentrated on boundary reconstruction, whereas this work focuses on infill problems; and 2) Chhetri employed sound, whereas this work employs vision. Furthermore, automatic inspection based on image classification has been used in production processes for defect identification, such as weld fault diagnosis, inspection of metallic surfaces [5], defect tracking in the hot rolling process [6], and so on. The focus of this research is on additive manufacturing technologies.

Identification of Malicious Defects in the Additive Manufacturing Line

The CAD model, the STL file, the toolpath file, and the machine itself are the four main vulnerabilities that can be exploited during fabrication. Corruption/encryption, resizing, indents/protrusions, vertex displacement, and voids could be the results of an attack [6-15].

At the start of the manufacturing process, file corruption/encryption can be discovered or identified. With a volumetric test, resizing, indents/protrusions, and vertex displacement can be identified after 3D printing and before assembly. A void inside a 3D-printed piece, on the other hand, cannot be seen because it is encapsulated within the structure. The void may cause a minor shift in density, but this is hard to determine. Modifications of the inner arrangement can lead to changes in physical properties such as strength and natural frequency, which can lead to poor performance. Unexpected vibrations, for example, might cause stiffness to deteriorate or the part to fail. Infill structural changes introduced in the 3D printer during fabrication have already been proved to be harmful and hard to notice. In this case, visual inspection during the process is performed [16-19].

A few of the possible infill patterns are described below; vision-based inspection during 3D printing is used to detect alterations in the infill design. Figure 1 demonstrates several infill variations with a density of 10%, generated using the Slic3r 1.2.9 program; the patterns include Honeycomb (Hexagon), Concentric, Rectilinear, and Octagram Spiral, among others.

There are more infill patterns than those illustrated in Figure 1, although the Hexagon (Honeycomb) infill pattern is among the most prevalent in practice, as Hales proved in 1999 [20,21]. It is also regarded as the most economical reinforcement and the fastest to produce. Void, irregular polygon, circle, rectangle, and triangle are five different infill defect patterns that can be used to depict deliberately broken portions. Figure 2 depicts these problems in the hexagonal reinforcement pattern.

Figure 1: 10 types of infill pattern with a density of 10%.

In this paper, the attack method is described as follows: the intrusion can cause one or more infill defects, as seen in Figure 2. The attack could be conducted without producing any visible change in the outer shell or solid walls. The fault could be randomly spread throughout the component or concentrated in one region. The size of the fault may vary depending on the size of the defective component. Because the infill is enclosed and no operator oversees the printing process, the defect cannot be assessed after the manufacturing process.

Figure 2: 4 Types of infill defect patterns.

Detection System

Preliminary design

A vision device is installed on a 3D printer to determine infill issues using vision-based inspection. In this investigation, the infill flaws are detected from the top view of the workpiece. On top of the 3D printer, we build an automated vision inspection setup.

Figure 3: Front, top, left, isometric view of camera location.

A camera installed on top of the 3D printer and placed near the extrusion head was used to get a view of the infill production process. In a physical setting, a supplementary light source can enhance picture quality. In a software simulation viewing scenario, photos can be collected quickly. Every slicing program has its own preview, so we selected one that correctly simulates layers and toolpaths, as depicted in Figure 3. The camera's location and viewpoint are depicted by the red triangle region in the front and left views. The camera's view is depicted by the rectangular grid in the center of the top image. The camera's view may move with the print head so that it examines the entire infill region.

Image generation

Images are taken from the MakerBot Desktop 3.9.1 display feature in the 3D printing software. The photos are 512 by 512 pixels in size. A total of 156 photos are taken for training and testing the classifier.

A total of 65 photos of non-defective parts are taken and classified as group A. Each image is given a number from A01 to A65. To increase the number of training photos, the non-defect group A images are recorded every 3-5 layers during the printing process, with infill density varying from 8% to 12%. Group B consists of 91 photos of defective parts. Each image is given a number from B01 to B91. During the manufacturing process, the defective group B photos are collected every 3-5 layers, with permutations of the five separate flaws depicted in Figure 2. For group B, the infill density is 10%.

Figure 4: Grayscale plot between group A and B.

Feature Extraction

Python 2.7 is used to perform the image processing. By plotting the grayscale values of non-defect image A01 at row 250 (shown in red in Figure 4-1), we can see repeating peaks in pairs: one moderate peak followed by one high peak. By plotting the grayscale values of defect image B30 at row 250 (Figure 4-2), we can see the repeating peaks in pairs at first, later replaced by consistent noise.

We used a grayscale cutoff of 120 to define peaks. As a result, the peaks in Figure 4-1 and Figure 4-2 can be counted to classify the images. The average grayscale value in Figure 4-1 is 107, with a standard deviation of 17 and a total of 30 pixels above the threshold. The average grayscale value in Figure 4-2 is 120, with a standard deviation of 32 and a total of 129 pixels above the threshold.
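As an illustration of this row-level analysis, the short Python sketch below computes the mean, standard deviation, and above-threshold pixel count for one row of a grayscale image. The file names, the use of NumPy/Pillow, and the function name are assumptions for illustration, not the paper's original code.

# Sketch of the row-level grayscale analysis (assumed file names and libraries).
import numpy as np
from PIL import Image

THRESHOLD = 120  # grayscale cutoff used to define peaks, per the text

def row_stats(image_path, row=250):
    """Mean, standard deviation, and count of pixels above THRESHOLD
    for a single row of a grayscale image."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64)
    profile = img[row, :]
    return profile.mean(), profile.std(), int((profile > THRESHOLD).sum())

# Hypothetical usage on a non-defect and a defect image:
# print(row_stats("A01.png"))   # the text reports roughly (107, 17, 30)
# print(row_stats("B30.png"))   # the text reports roughly (120, 32, 129)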

Each image is split into eight sections for feature extraction, as illustrated in Figure 5. Each section has 64 rows, for a total of 32768 pixels. For defect classification, the following features are extracted:

•Mean grayscale value of each section.

•Grayscale standard deviation of each section.

•Number of pixels with a grayscale value greater than 120 in each section.

In conclusion, each image yields 24 features: eight sections, each with three features.
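A minimal sketch of this 24-feature extraction is given below: each 512 by 512 grayscale image is split into eight horizontal sections of 64 rows, and the mean, standard deviation, and above-threshold pixel count are computed per section, ordered as in Equation (1). The file handling and library choices (NumPy, Pillow) are assumptions, not the paper's original Python 2.7 code.

# Sketch of the per-section feature extraction described above.
import numpy as np
from PIL import Image

THRESHOLD = 120   # grayscale threshold from the feature definition
SECTIONS = 8      # each 512x512 image is divided into 8 sections of 64 rows

def extract_features(image_path):
    """Return the 24-element feature vector (Mean1..8, SD1..8, NP1..8)."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64)
    rows = img.shape[0] // SECTIONS              # 512 / 8 = 64 rows per section
    means, sds, nps = [], [], []
    for s in range(SECTIONS):
        section = img[s * rows:(s + 1) * rows, :]   # 64 x 512 = 32768 pixels
        means.append(section.mean())
        sds.append(section.std())
        nps.append(int((section > THRESHOLD).sum()))
    return means + sds + nps                     # ordered as in Equation (1)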

Image classification algorithm

The Naive Bayes method is a classification algorithm that counts the occurrences and combinations of values in a data set to create a set of probabilities. Given the value of the target class, the method applies Bayes' theorem and assumes that all attributes are conditionally independent. In practical applications, this assumption is rarely valid. In many situations, though, the method continues to work well enough and learns quickly.

In a nutshell, naive Bayes is a conditional probability model in which a problem instance, denoted by a feature vector, is categorized. J48, by contrast, builds a decision tree. In a classification task, the decision tree strategy is among the most useful. A tree is built to represent the categorization process; once the tree is constructed, it is applied to each item in the data, resulting in a categorization [8].

In a naive Bayes model, a problem instance to be classified is represented by a feature vector X = (x1, ..., xn) of n features (independent variables), and the model assigns to this instance the probabilities p(Ck | x1, ..., xn) for each of K possible outcomes or classes. In this work, the conditional probability model is represented by the vector

X = (Mean1, ..., Mean8, SD1, ..., SD8, NP1, ..., NP8)   (1)

where Mean1..8 is the mean grayscale value of image sections 1 to 8, SD1..8 is the grayscale standard deviation of image sections 1 to 8, and NP1..8 is the number of pixels with a grayscale value greater than the threshold in image sections 1 to 8. The class probabilities are expressed in the same way:

p(A | Mean1, ..., Mean8, SD1, ..., SD8, NP1, ..., NP8)

p(B | Mean1, ..., Mean8, SD1, ..., SD8, NP1, ..., NP8)

Using Bayes' theorem, the conditional probability can be decomposed as follows:


P(Ck | X) = P(X | Ck) P(Ck) / P(X)   (2)

In this work, Ck is either A or B, so the conditional probabilities can be expressed as follows:

P(A | X) = P(X | A) P(A) / P(X)   (3)

P(B | X) = P(X | B) P(B) / P(X)   (4)

In the Weka data mining tool, J48 is an open-source Java implementation of the C4.5 algorithm, a widely used algorithm for practical machine learning.
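As an illustration of how the 24-feature vectors can be fed to these two classifiers, the sketch below uses scikit-learn's GaussianNB and DecisionTreeClassifier as stand-ins for the Weka Naive Bayes and J48 (C4.5) implementations used in this work, with the same tenfold cross-validation protocol; the variable names and the choice of library are assumptions.

# Stand-in sketch: scikit-learn in place of Weka's NaiveBayes and J48.
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def evaluate(X, y):
    """X: 156 x 24 feature matrix; y: labels 'A' (non-defect) or 'B' (defect)."""
    classifiers = [("Naive Bayes", GaussianNB()),
                   ("Decision tree (C4.5-like)", DecisionTreeClassifier())]
    for name, clf in classifiers:
        scores = cross_val_score(clf, X, y, cv=10)   # tenfold cross-validation
        print("%s: mean accuracy = %.4f" % (name, scores.mean()))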

Figure 5: Each image equally divided into 8 sections.

Experiment Results

With tenfold cross-validation, we fed the feature data into a Naive Bayes classifier and J48 decision trees. The results are summarized in Table 1, which gives the confusion matrix for both techniques. The accuracy of the Naive Bayes classifier is 85.26%, with a low false alarm rate but a noticeable rate of missed defects. The accuracy of the J48 decision trees is 95.51%, with both a low false alarm rate and a low rate of missed defects.

Table 1: Confusion matrices and accuracies for the Naive Bayes classifier and J48 decision trees (rows: actual class; columns: predicted class).

Algorithm         Naive Bayes                J48 decision trees
                  Predicted A   Predicted B  Predicted A   Predicted B
Actual class A    63            1            61            3
Actual class B    22            70           4             88
Accuracy          85.26%                     95.51%
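The reported accuracies can be checked directly from the confusion matrices in Table 1; the short sketch below does the arithmetic, treating the defective class B as the positive class for the detection-rate and false-alarm figures (that labeling is an assumption, not stated in the table).

# Worked check of Table 1 (rows: actual class, columns: predicted class).
def summarize(name, aa, ab, ba, bb):
    total = aa + ab + ba + bb
    accuracy = (aa + bb) / float(total)
    detection_rate = bb / float(ba + bb)    # defective parts correctly flagged
    false_alarm_rate = ab / float(aa + ab)  # non-defective parts flagged as defective
    print("%s: accuracy=%.4f, detection rate=%.4f, false alarms=%.4f"
          % (name, accuracy, detection_rate, false_alarm_rate))

summarize("Naive Bayes", aa=63, ab=1, ba=22, bb=70)        # accuracy = 0.8526
summarize("J48 decision trees", aa=61, ab=3, ba=4, bb=88)  # accuracy = 0.9551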


The Receiver Operating Characteristic (ROC) curve for the naive Bayes classifier is shown in Figure 6, and the ROC curve for the J48 decision tree is shown in Figure 7. Plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) yields the curve. The area under the ROC curve is used as a measure of classification performance.
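A sketch of how such ROC curves can be produced is given below, again using scikit-learn as a stand-in for the tools used in the paper: tenfold cross-validated probability estimates for the defective class B are scored against the true labels. The function and variable names are illustrative.

# Sketch of ROC computation from cross-validated probability estimates.
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

def roc_for_classifier(clf, X, y):
    """Return FPR, TPR, and the area under the ROC curve for labels 'A'/'B'."""
    proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")
    score_b = proba[:, 1]                        # probability of defect class 'B'
    fpr, tpr, _ = roc_curve(y, score_b, pos_label="B")
    return fpr, tpr, auc(fpr, tpr)

# Example: fpr, tpr, area = roc_for_classifier(GaussianNB(), X, y)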

Figure 6: ROC curve for naive Bayes classifier.

Figure 7: ROC curve for J48 decision tree.

The experiments were conducted on a Windows 7 64-bit PC with an Intel Core i5-2520M 2.5 GHz processor and 8 GB of RAM. The feature extraction phase takes 47.17 seconds for 156 photos, averaging about 0.3 seconds per picture. Both the Naive Bayes classifier and the J48 decision trees run very quickly and can be used for real-time detection.

Conclusion

This paper proposes a machine learning image classification approach for detecting 3D printing infill defects. The approach was investigated in terms of functionality, feature extraction, and implementation of the Naive Bayes classifier and the J48 decision trees, which achieved accuracies of 85.26% and 95.51%, respectively. In the future, the following goals can be pursued: 1) applying this approach to other types of infill, 2) applying this technique in a physical setup with 3D printers and cameras, 3) applying this technique to ordinary 3D printing quality defects, and 4) using other machine learning methods to find the best-performing classifier.

References
