Proceedings of International Conference on Applied Innovation in IT
2022/03/09, Volume 10, Issue 1, pp.51-59

Robotic System Operation Specification on the Example of Object Manipulation

Leonid Mylnikov, Pavel Slivnitsin and Anna Mylnikova

Abstract: Currently, robotic system tasks are formalized with the help of procedural programming languages that neither take into account the specifics of robots nor generalize across applications. The goal of this paper is to develop a method for the semantic description of the sequence of operations performed by a robotic system, using object manipulation as an example. To achieve this goal, a method of graphical representation of a robotic system operation specification and its semantic description (a metalanguage) are proposed. The paper considers approaches to object representation, determines how object characteristics are stored, and provides a list of possible operations on objects. The resulting methods of graphical and semantic operation specification allow a task to be assigned without being bound to a specific technical solution. In addition, the paper provides examples of operation assignments for a robotic arm.
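The paper's actual metalanguage is not reproduced on this page; as a rough illustration of the idea the abstract describes (a task expressed as a sequence of operations on described objects, rather than as robot-specific procedural code), a minimal sketch might look as follows. All names here (`SceneObject`, `Operation`, the `locate`/`grasp`/`move`/`release` verbs and their parameters) are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass, field

# Hypothetical object description: a named object with a dictionary of
# characteristics, kept separate from any concrete robot or sensor.
@dataclass
class SceneObject:
    name: str
    characteristics: dict = field(default_factory=dict)

# Hypothetical operation: a verb applied to a target object, with parameters.
@dataclass
class Operation:
    verb: str
    target: SceneObject
    params: dict = field(default_factory=dict)

def describe(seq):
    """Render an operation sequence as numbered human-readable lines."""
    return [f"{i + 1}. {op.verb}({op.target.name}, {op.params})"
            for i, op in enumerate(seq)]

# Example task for a robotic arm: pick up a cube and place it at a target pose.
cube = SceneObject("cube", {"shape": "box", "edge_mm": 40})
task = [
    Operation("locate", cube),                        # recognition/positioning step
    Operation("grasp", cube, {"force_n": 5.0}),       # manipulation step
    Operation("move", cube, {"to": (100, 200, 50)}),  # target position, mm
    Operation("release", cube),
]
for line in describe(task):
    print(line)
```

The point of such a representation is the one the abstract makes: the task is stated in terms of objects and operations, and any robot whose runtime can interpret these verbs could execute it, so the specification is not bound to a specific technical solution.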

Keywords: Function Modeling, Diagram, Metalanguage, Recognition, Identification, Positioning, SLAM, Computer Vision, Robotic System

DOI: 10.25673/76932







           ISSN 2199-8876
           Copyright © 2013-2021 Leonid Mylnikov, © 2022 at Anhalt University of Applied Sciences. All rights reserved.