In cooperation with the Iranian Food Science and Technology Association

Article type: Research paper

Authors

Department of Mechanics of Biosystems Engineering, Ramin University of Agriculture and Natural Resources of Khuzestan.

Abstract

Color is the first quality attribute of food evaluated by consumers. Color measurement of food products is used as an indirect measure of other quality attributes, such as flavor and pigment content, because it is fast and simple and correlates well with other physical properties of food products. Among the various color spaces, the L*a*b* space is generally used for food color measurement owing to its uniform distribution of colors and its close agreement with human perception. Commercial colorimeters cover only a small area of the product when measuring color, whereas digital cameras provide the user with pixel-level information. This research presents a computational solution for extracting L*a*b* units from the pixel information of digital RGB images. Four models were used to convert RGB units to L*a*b*: linear, quadratic, artificial neural network (ANN) and support vector regression (SVR). In the evaluation of the models, support vector regression and the neural network performed best, with errors of 0.88 and 2.37, respectively. With the fitted models, a good relationship was established between measured and estimated color. Therefore, based on the machine vision results, the method recommended in this study is suitable for accurately converting the color of a food product from the pixel information of a digital camera to L*a*b* units.
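For reference, the L*a*b* coordinates referred to above are defined from CIE XYZ tristimulus values relative to a reference white (X_n, Y_n, Z_n); camera RGB values are device- and illumination-dependent, which is why an empirical transformation such as the one developed here is required. A standard formulation (general background, not taken from this study) is:

```latex
% Standard CIE definition of L*, a*, b* from tristimulus values X, Y, Z
% relative to the reference white (X_n, Y_n, Z_n).
L^{*} = 116\, f\!\left(\frac{Y}{Y_n}\right) - 16, \quad
a^{*} = 500\left[ f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right) \right], \quad
b^{*} = 200\left[ f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right) \right],
\qquad
f(t) =
\begin{cases}
t^{1/3}, & t > \left(\tfrac{6}{29}\right)^{3} \\[4pt]
\dfrac{t}{3\left(\tfrac{6}{29}\right)^{2}} + \dfrac{4}{29}, & \text{otherwise.}
\end{cases}
```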

Keywords

Article Title [English]

Computational estimation of L*a*b* units from RGB using machine vision

Authors [English]

  • Saman Abdanan
  • Somaye Amraei

Department of Mechanics of Biosystems Engineering, Faculty of Agricultural Engineering and Rural Development, Ramin University of Agriculture and Natural Resources of Khuzestan, Iran.

Abstract [English]

Introduction: Color is the first quality attribute of food evaluated by consumers, and it is therefore an important quality component of food that influences consumers' choices and preferences (Maguire, 1994). Color measurement of food products has been used as an indirect measure of other quality attributes, such as flavor and pigment content, because it is simpler, faster and correlates well with other physicochemical properties. Rapid and objective measurement of food color is therefore required in quality control for the commercial grading of products (Trusell et al., 2005). Among the different color spaces, the L*a*b* color space is the most practical system for measuring the color of food, owing to its uniform distribution of colors and its close agreement with human perception of color. Commercial L*a*b* colorimeters generally measure small, non-representative areas (Pathare et al., 2013), whereas RGB digital cameras provide pixel-level information. This research therefore establishes a computational solution that yields L*a*b* color units for each pixel of a digital RGB image (Fernandez-Vazquez et al., 2011). In recent years, computer vision has been used to measure the color of different foods objectively, since it offers clear advantages over a conventional colorimeter, namely the possibility of analyzing every pixel of the entire surface of the food and of quantifying surface characteristics and defects (Mendoza & Aguilera, 2004). The color of many foods has been measured using computer vision techniques (Pedreschi et al., 2011; Lang et al., 2012). A computational technique combining a digital camera with image-processing software provides a less expensive and more versatile way to measure the color of many foods than traditional color-measuring instruments. This study used four models to carry out the RGB to L*a*b* transformation: linear, quadratic, support vector regression and neural network. This article presents the details of each model, their performance, and their advantages and disadvantages. The purpose of this work was to find a model (and estimate its parameters) for obtaining L*a*b* color measurements from RGB measurements.
Materials and Methods: The images used in this work were taken with a Samsung SM-N9005 color digital camera with 13-megapixel resolution (Fig. 1). The camera was placed vertically at a distance of 60 cm from the samples, and the angle between the axis of the lens and the illumination sources was approximately 45°. Illumination was provided by four 150 W natural daylight lamps.
Fig. 1. Schematic diagram of the image acquisition system.
To calibrate the digital color system, the color values of 42 color charts were measured. Each chart was divided into 24 regions, and the L*a*b* values of each region were measured with a Minolta colorimeter. In addition, an RGB digital image was taken of each chart, and the R, G and B values of the corresponding regions were extracted with a Matlab program that computes the mean of each color channel in each region according to the 24 masks. Four models were then fitted for the RGB to L*a*b* transformation: linear, quadratic, artificial neural network (ANN) and support vector regression (SVR).
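As a rough illustration of this calibration step, the sketch below fits the four model types to paired chart measurements. It is a minimal sketch using scikit-learn in place of the Matlab program described above; the file names, hyperparameters and the use of mean ΔE*ab as the error measure are assumptions, not details taken from the paper.

```python
# Minimal sketch of the RGB -> L*a*b* calibration described above.
# File names, model hyperparameters and the error measure are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Paired training data: mean R, G, B of each chart region (42 charts x 24
# regions = 1008 rows) and the corresponding colorimeter L*, a*, b* values.
rgb = np.loadtxt("chart_rgb.csv", delimiter=",")   # shape (1008, 3), hypothetical file
lab = np.loadtxt("chart_lab.csv", delimiter=",")   # shape (1008, 3), hypothetical file
X_train, X_test, y_train, y_test = train_test_split(rgb, lab, test_size=0.25, random_state=0)

models = {
    "linear":    LinearRegression(),
    "quadratic": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "ANN":       MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
    "SVR":       MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=0.1)),
}

for name, model in models.items():
    model.fit(X_train / 255.0, y_train)            # normalise RGB to [0, 1]
    pred = model.predict(X_test / 255.0)
    # Mean Euclidean distance in L*a*b* (Delta E*ab) as the error measure
    delta_e = np.mean(np.linalg.norm(pred - y_test, axis=1))
    print(f"{name}: mean Delta E*ab = {delta_e:.2f}")
```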
Results and Discussion: In the evaluation of model performance, support vector regression and the neural network model stood out with errors of only 0.88 and 2.37, respectively. Leon et al. (2006) investigated several models for the RGB to L*a*b* conversion; in their evaluation, the neural network model showed an error of only 0.93%. In another study, Yagiz et al. (2009) measured the L*a*b* values of Atlantic salmon fillets subjected to different electron beam doses (0, 1, 1.5, 2 and 3 kGy) using both a Minolta CR-200 Chroma Meter and a machine vision system. For both instruments, the L* value increased and the a* and b* values decreased with increasing irradiation dose; however, the machine vision system gave significantly higher L*, a* and b* readings than the Minolta colorimeter. With the fitted models, a good correlation between measured and predicted color was established. Therefore, based on the promising computer vision results, an L*a*b* color measuring system based on a color digital camera is suitable for an accurate, exacting and detailed characterization of a food item. To demonstrate the capability of the proposed method, the color of an orange was measured using both a Minolta colorimeter and the studied approach. The colorimeter value was obtained by averaging six measurements at six different places on the surface of the orange, whereas the digital-image value was estimated by averaging all pixels of the surface image. The results are summarized in Fig. 2.

Measurement method      L*      a*      b*
Minolta colorimeter     58.98   28.32   35.49
Machine vision (SVR)    61.20   27.30   37.35
Machine vision (ANN)    60.18   30.19   30.60

Fig. 2. Estimated L*a*b* values of an orange.
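The per-pixel averaging used for the orange example could look roughly like the sketch below. It is a minimal sketch, not the authors' code; the file name, the crude brightness-based background removal and the `model` argument (a regressor fitted as in the previous sketch) are illustrative assumptions.

```python
# Sketch of applying a fitted RGB -> L*a*b* model to every pixel of a
# product image and averaging, as in the orange example of Fig. 2.
import numpy as np
from skimage import io

def mean_lab_of_surface(image_path, model):
    """Average the predicted L*, a*, b* over the (roughly) segmented product surface."""
    img = io.imread(image_path)                       # RGB image, shape (H, W, 3)
    pixels = img.reshape(-1, 3).astype(float) / 255.0
    # Crude background removal: discard near-black and near-white pixels.
    brightness = pixels.mean(axis=1)
    mask = (brightness > 0.05) & (brightness < 0.98)
    lab_pixels = model.predict(pixels[mask])          # shape (n_pixels, 3)
    return lab_pixels.mean(axis=0)

# Example (hypothetical file name), using the fitted SVR from the previous sketch:
# print(np.round(mean_lab_of_surface("orange.jpg", models["SVR"]), 2))
```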

Keywords [English]

  • Color
  • RGB
  • L*a*b*
  • ANN
  • SVR
References

Abdanan Mehdizadeh, S., 2016. Crack detection in egg shells using PCA and SVM. Iranian Journal of Food Science and Technology, 13(56), 143-153. [In Persian]
Nasehi, B., 2013. Evaluation of different methods of color assessment in spaghetti. Journal of Food Research, 23(1), 47-57. [In Persian]
Abdanan Mehdizadeh, S., Minaei, S., Hancock, N. H. & Karimi Torshizi M. A., 2014. An intelligent system for egg quality classification based on visible-infrared transmittance spectroscopy. Information Processing in Agriculture, 1, 105-114.
Abdanan Mehdizadeh, S., Sandell, G., Golpour, A. & Karimi Torshizi M. A., 2014. Early Determination of Pharaoh Quail Sex after Hatching Using Machine Vision. Bulletin of Environment, Pharmacology and Life Sciences, 1, 105-114.
Alonso, J., Castanon, A. R., & Bahamonde, A., 2013. Support Vector Regression to predict carcass weight in beef cattle in advance of the slaughter. Computers and Electronics in Agriculture, 91, 116-120.
Craninx, M., Fievez, V., Vlaeminck, B., & De Baets, B., 2008. Artificial neural network models of the rumen fermentation pattern in dairy cattle. Computers and Electronics in Agriculture, 60(2), 226-238.
Fernandez-Vazquez, R., Stinco, C. M., Melendez-Martinez, A. J., Heredia, F. J., & Vicario, I. M., 2011. Visual and instrumental evaluation of orange juice color: a consumers’ preference study. Journal of Sensory Studies, 26, 436-444.
Hardeberg, J. Y., Schmitt, F., Tastl, I., Brettel, H., & Crettez, J.-P., 1996. In Proceedings of 4th Color Imaging Conference: Color Science, Systems and Applications, Scottsdale, Arizona, Nov, pp. 108-113.
Hornik, K., Stinchcombe, M., & White, H., 1989. Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359-366.
Ilie, A., & Welch, G., 2005. Ensuring color consistency across multiple cameras. In Proceedings of the tenth IEEE international conference on computer vision (ICCV-05), Vol. 2, 17–20 Oct (pp. 1268-1275).
Jain, L. C., & Fanelli, A. M., 2000. Recent Advances in Artificial Neural Networks: Design and Applications, CRC Press.
Khanna, T., 1990. Foundations of Neural Networks, Addison-Wesley Publishing Company.
Lang, C., & Hübert, T., 2012. A color ripeness indicator for apples. Food and Bioprocess Technology, 5(8), 3244-3249.
Leon, K., Mery, D., Pedreschi, F., & Leon, J., 2006. Color measurement in L*a*b* units from RGB digital images. Food Research International, 39(10), 1084-1091.
Lolas, S., & Olatunbosun, O. A., 2008. Prediction of vehicle reliability performance using artificial neural networks. Expert Systems with Applications, 34(4), 2360-2369.
Maguire, K., 1994. Perceptions of meat and food: Some implications for health promotion strategies. British Food Journal, 96(2), 11-17.
Mancini, R. A., & Hunt, M. C., 2005. Current research in meat color. Meat Science, 71(1), 100-121.
Mendoza, F., & Aguilera, J. M., 2004. Application of image analysis for classification of ripening bananas. Journal of Food Science, 69, 471-477.
Paschos, G.,2001. Perceptually uniform color spaces for color texture analysis: an empirical evaluation. IEEE Transactions on Image Processing, 10(6), pp.932-937.
Pathare, P. B., Opara, U. L., & Al-Said, F. A. J., 2013. Color measurement and analysis in fresh and processed foods: a review. Food and Bioprocess Technology, 6(1), 36-60.
Pedreschi, F., Mery, D., Bunger, A., & Yanez, V., 2011. Computer vision classification of potato chips by color. Journal of Food Process Engineering, 34, 1714-1728.
Söderström, T., & Stoica, P., 1989. System Identification. New York: Prentice-Hall.
Tripathy, P. P., & Kumar, S., 2009. Neural network approach for food temperature prediction during solar drying. International Journal of Thermal Sciences, 48(7), 1452-1459.
Trusell, H. J., Saber, E., & Vrhel, M., 2005. Color image processing, IEEE Signal Processing Magazine, 22(1), 14-22.
Vapnik, V.N., 1998. Statistical Learning Theory. Wiley-Interscience, New York.
Wu, D., & Sun, D. W., 2013. Color measurements by computer vision for food quality control–A review. Trends in Food Science & Technology, 29(1), 5-20.
Yagiz, Y., Balaban, M. O., Kristinsson, H. G., Welt, B. A., & Marshall, M. R., 2009. Comparison of Minolta colorimeter and machine vision system in measuring colour of irradiated Atlantic salmon. Journal of the Science of Food and Agriculture, 89, 728-730.
Yam, K. L., & Papadakis, S., 2004. A simple digital imaging method for measuring and analyzing color of food surfaces. Journal of Food Engineering, 61, 137-142.
Zapotoczny, P., & Majewska, K., 2010. A comparative analysis of colour measurements of the seed coat and endosperm of wheat kernels performed by various techniques. International Journal of Food Properties, 13, 75-89.