Detecting ear lesions in slaughtered pigs using Artificial Intelligence
Matteo D’Angelo; Domenico Sciota; Jasmine Hattab; Anastasia Romano; Alberto Olivastri; Giuseppe Marruchella
2024-01-01
Abstract
Background: Recording skin lesions in slaughtered pigs is useful for assessing animal welfare. Objectives: To train a convolutional neural network (CNN) to detect ear lesions at slaughter. Methods: A total of 1415 pictures (dataset) were randomly collected along the slaughter chain. Carcasses were photographed after passing through the scalding tank, flaming and brushing. Each image included the external surface of both ears and was evaluated by two veterinarians, who agreed to classify each pinna as healthy or diseased. In addition, ears ripped during brushing were classified as unsuitable. Thereafter, 1000 images were used to train an open-source CNN (TensorFlow, available at https://keras.io) to detect any change in the silhouette of the auricle. The remaining 415 images were used to test the CNN’s accuracy by comparing its predictions with the veterinarians’ assessment. Results: Veterinarians classified 78.2% of ear pinnae as healthy, 17.8% as diseased and 4.0% as unsuitable. Overall, the CNN’s accuracy was 95.06%. However, the CNN was much less accurate at identifying unsuitable ears (59.45%). In detail, the CNN correctly recognized 22 out of 37 ear pinnae affected by severe artifacts, while it mistakenly classified the remaining 15 unsuitable ears as healthy (n = 7) or diseased (n = 8). Conclusions: This study highlights that open-source, artificial intelligence-based tools can effectively fulfill well-targeted tasks. Artifacts represented the greatest concern, as they were the main source of errors. We believe this issue could be solved by enlarging the dataset and/or taking pictures before brushing, where allowed by the slaughter chain facility.
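As a quick sanity check, the per-class accuracy for unsuitable ears reported in the Results can be recomputed from the raw counts given in the abstract (22 of 37 unsuitable pinnae correctly recognized; 7 misread as healthy, 8 as diseased). The variable names below are illustrative, not taken from the study's code:

```python
# Counts reported in the Results section of the abstract.
correct_unsuitable = 22
misclassified = {"healthy": 7, "diseased": 8}

# Total unsuitable pinnae in the test set: 22 correct + 15 misclassified = 37.
total_unsuitable = correct_unsuitable + sum(misclassified.values())

# Per-class accuracy for the "unsuitable" category.
accuracy = correct_unsuitable / total_unsuitable
print(f"{accuracy * 100:.4f}%")  # prints "59.4595%"
```

This agrees with the 59.45% figure reported in the abstract (the paper appears to truncate rather than round the last digit).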


