Showing 1 - 10 of 104 results for '"Human activity"' (query time: 1.30s)
  1. Academic Journal

    Source: Pädi Boletín Científico de Ciencias Básicas e Ingenierías del ICBI; Vol. 12 No. Especial (2024); 50-56; 2007-6363; 10.29057/icbi.v12iEspecial

    File Description: application/pdf

  2. Academic Journal
  3. Book
  4. Academic Journal
  5. Academic Journal
  6. Academic Journal

    Source: Tecnura; Vol. 26 No. 74 (2022): October - December; 213-236; 2248-7638; 0123-921X

    File Description: application/pdf; text/xml

    Relation: https://revistas.udistrital.edu.co/index.php/Tecnura/article/view/17413/18508; https://revistas.udistrital.edu.co/index.php/Tecnura/article/view/17413/18591; Adhikari, K., Bouchachia, H., & Nait-Charif, H. (2017, May 8-12). Activity recognition for indoor fall detection using convolutional neural network [Conference presentation]. 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), Nagoya, Japan. https://doi.org/10.23919/MVA.2017.7986795; Akhavian, R., & Behzadan, A. H. (2016). Smartphone-based construction workers' activity recognition and classification. Automation in Construction, 71(Part 2), 198-209. https://doi.org/10.1016/j.autcon.2016.08.015; Amiri, S. M., Pourazad, M. T., Nasiopoulos, P., & Leung, V. C. M. (2014). Improved human action recognition in a smart home environment setting. IRBM, 35(6), 321-328. https://doi.org/10.1016/j.irbm.2014.10.005; Auvinet, E., Rougier, C., Meunier, J., St-Arnaud, A., & Rousseau, J. (n.d.). Multiple cameras fall dataset. http://www.iro.umontreal.ca/~labimage/Dataset/; Auvinet, E., Multon, F., Saint-Arnaud, A., Rousseau, J., & Meunier, J. (2011). Fall detection with multiple cameras: An occlusion-resistant method based on 3-D silhouette vertical distribution. IEEE Transactions on Information Technology in Biomedicine, 15(2), 290-300. https://doi.org/10.1109/TITB.2010.2087385; Avci, A., Bosch, S., Marin-Perianu, M., Marin-Perianu, R., & Havinga, P. (2010, February 22-25). Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey [Conference presentation]. 23rd International Conference on Architecture of Computing Systems, Hannover, Germany. https://ieeexplore.ieee.org/document/5759000; Banos, O., Damas, M., Pomares, H., Prieto, A., & Rojas, I. (2012). Daily living activity recognition based on statistical feature quality group selection. Expert Systems with Applications, 39(9), 8013-8021. https://doi.org/10.1016/j.eswa.2012.01.164; Barbosa-Chacón, J. W., Barbosa-Herrera, J. C., & Rodríguez-Villabona, M. (2013). Revisión y análisis documental para estado del arte: una propuesta metodológica desde el contexto de la sistematización de experiencias educativas. Scielo Analytics, 27, 83-105. https://doi.org/10.1016/S0187-358X(13)72555-3; Ben Mabrouk, A., & Zagrouba, E. (2018). Abnormal behavior recognition for intelligent video surveillance systems: A review. Expert Systems with Applications, 91, 480-491. https://doi.org/10.1016/j.eswa.2017.09.029; Berlin, S. J., & John, M. (2016, October 24-27). Human interaction recognition through deep learning network [Conference presentation]. 2016 IEEE International Carnahan Conference on Security Technology (ICCST), Orlando, FL, USA. https://doi.org/10.1109/CCST.2016.7815695; Brophy, E., Domínguez-Veiga, J. J., Wang, Z., & Ward, T. E. (2018, June 21-22). A machine vision approach to human activity recognition using photoplethysmograph sensor data [Conference presentation]. 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, UK. https://doi.org/10.1109/ISSC.2018.8585372; Cai, X., Liu, X., Li, S., & Han, G. (2019, October 16-19). Fall detection based on colorization coded MHI combining with convolutional neural network [Conference presentation]. 2019 IEEE 19th International Conference on Communication Technology (ICCT), Xi'an, China. https://doi.org/10.1109/ICCT46805.2019.8947223; Chakraborty, B., Holte, M. B., Moeslund, T. B., & González, J. (2012). Selective spatio-temporal interest points. Computer Vision and Image Understanding, 116(3), 396-410. https://doi.org/10.1016/j.cviu.2011.09.010; Charfi, I., Miteran, J., Dubois, J., Atri, M., & Tourki, R. (2013). Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification. Journal of Electronic Imaging, 22(4), 041106. https://doi.org/10.1117/1.JEI.22.4.041106; Chen, L., Nugent, C. D., & Wang, H. (2012). A knowledge-driven approach to activity recognition in smart homes. IEEE Transactions on Knowledge and Data Engineering, 24(6), 961-974. https://doi.org/10.1109/TKDE.2011.51; Computer Vision Department of the MICA International Research Institute & Posts & Telecommunications Institute of Technology (COMVIS-PTIT) (n.d.). Continuous multimodal multi-view dataset of human fall (CMDFALL). https://www.mica.edu.vn/perso/Tran-Thi-Thanh-Hai/CMDFALL.html; Concone, F., Lo Re, G., & Morana, M. (2019). A fog-based application for human activity recognition using personal smart devices. ACM Transactions on Internet Technology, 19(2), 1-20. https://doi.org/10.1145/3266142; Contreras-Contreras, G. F., Medina-Delgado, B., Acevedo-Jaimes, B. R., & Guevara-Ibarra, D. (2022). Metodología de desarrollo de técnicas de agrupamiento de datos usando aprendizaje automático. Tecnura, 26(72), 42-58. https://doi.org/10.14483/22487638.17246; Cosar, S., Donatiello, G., Bogorny, V., Garate, C., Alvares, L. O., & Bremond, F. (2017). Toward abnormal trajectory and event detection in video surveillance. IEEE Transactions on Circuits and Systems for Video Technology, 27(3), 683-695. https://doi.org/10.1109/TCSVT.2016.2589859; Das Dawn, D., & Shaikh, S. H. (2016). A comprehensive survey of human action recognition with spatio-temporal interest point (STIP) detector. The Visual Computer, 32(3), 289-306. https://doi.org/10.1007/s00371-015-1066-2; Debard, G., Mertens, M., Deschodt, M., Vlaeyen, E., Devriendt, E., Dejaeger, E., Milisen, K., Tournoy, J., Croonenborghs, T., Goedemé, T., Tuytelaars, T., & Vanrumste, B. (2016). Camera-based fall detection using real-world versus simulated data: How far are we from the solution? Journal of Ambient Intelligence and Smart Environments, 8(2), 149-168. https://doi.org/10.3233/AIS-160369; Durrant-Whyte, H., Roy, N., & Abbeel, P. (2012). Robotics: Science and Systems VII. MIT Press.; Efros, A. A., Berg, A. C., Mori, G., & Malik, J. (2003, October 13-16). Recognizing action at a distance [Conference presentation]. 9th IEEE International Conference on Computer Vision, Nice, France. https://doi.org/10.1109/ICCV.2003.1238420; El Kaid, A., Baïna, K., & Baïna, J. (2019). Reduce false positive alerts for elderly person fall video-detection algorithm by convolutional neural network model. Procedia Computer Science, 148, 2-11. https://doi.org/10.1016/j.procs.2019.01.004; Elbasiony, R., & Gomaa, W. (2020). A survey on human activity recognition based on temporal signals of portable inertial sensors. In A. E. Hassanien, A. T. Azar, T. Gaber, R. Bhatnagar, & M. F. Tolba (Eds.), The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2019) (pp. 734-745). Springer. https://doi.org/10.1007/978-3-030-14118-9_72; Espinosa, R., Ponce, H., Gutiérrez, S., Martínez-Villaseñor, L., Brieva, J., & Moya-Albor, E. (2019). A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-Fall detection dataset. Computers in Biology and Medicine, 115, 103520. https://doi.org/10.1016/j.compbiomed.2019.103520; Fan, Y., Levine, M. D., Wen, G., & Qiu, S. (2017). A deep neural network for real-time detection of falling humans in naturally occurring scenes. Neurocomputing, 260, 43-58. https://doi.org/10.1016/j.neucom.2017.02.082; Foroughi, H., Aski, B. S., & Pourreza, H. (2008). Intelligent video surveillance for monitoring fall detection of elderly in home environments [Conference presentation]. 2008 11th International Conference on Computer and Information Technology, Khulna, Bangladesh. https://doi.org/10.1109/ICCITECHN.2008.4803020; Goudelis, G., Tsatiris, G., Karpouzis, K., & Kollias, S. (2015). Fall detection using history triple features. In ACM (Eds.), Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments - PETRA '15 (art. 81). ACM Press. https://doi.org/10.1145/2769493.2769562; Han, J., Shao, L., Xu, D., & Shotton, J. (2013). Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Transactions on Cybernetics, 43(5), 1318-1334. https://doi.org/10.1109/TCYB.2013.2265378; Harris, C., & Stephens, M. (1988). A combined corner and edge detector. In C. J. Taylor (Ed.), Proceedings of the Alvey Vision Conference (pp. 23.1-23.6). Alvey Vision Club.; Hassan, M. M., Uddin, M. Z., Mohamed, A., & Almogren, A. (2018). A robust human activity recognition system using smartphone sensors and deep learning. Future Generation Computer Systems, 81, 303-313. https://doi.org/10.1016/j.future.2017.11.029; Hbali, Y., Hbali, S., Ballihi, L., & Sadgal, M. (2018). Skeleton-based human activity recognition for elderly monitoring systems. IET Computer Vision, 12(1), 16-26. https://doi.org/10.1049/iet-cvi.2017.0062; He, K., Zhang, X., Ren, S., & Sun, J. (2016, June 27-30). Deep residual learning for image recognition [Conference presentation]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA. https://doi.org/10.1109/CVPR.2016.90; Hsieh, J.-W., Chuang, C.-H., Alghyaline, S., Chiang, H.-F., & Chiang, C.-H. (2014). Abnormal scene change detection from a moving camera using bags of patches and spider-web map. IEEE Sensors Journal, 15(5), 2866-2881. https://doi.org/10.1109/JSEN.2014.2381257; Hsieh, Y.-Z., & Jeng, Y.-L. (2018). Development of home intelligent fall detection IoT system based on feedback optical flow convolutional neural network. IEEE Access, 6, 6048-6057. https://doi.org/10.1109/ACCESS.2017.2771389; Ismail, S. J., Rahman, M. A. A., Mazlan, S. A., & Zamzuri, H. (2015, October 18-20). Human gesture recognition using a low cost stereo vision in rehab activities [Conference presentation]. 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Langkawi, Malaysia. https://doi.org/10.1109/IRIS.2015.7451615; Jalal, A., Kim, Y.-H., Kim, Y.-J., Kamal, S., & Kim, D. (2017). Robust human activity recognition from depth video using spatiotemporal multi-fused features. Pattern Recognition, 61, 295-308. https://doi.org/10.1016/j.patcog.2016.08.003; Jalal, A., Uddin, M. Z., Kim, J. T., & Kim, T.-S. (2012). Recognition of human home activities via depth silhouettes and ℜ transformation for smart homes. Indoor and Built Environment, 21(1), 184-190. https://doi.org/10.1177/1420326X11423163; Kahani, R., Talebpour, A., & Mahmoudi-Aznaveh, A. (2019). A correlation based feature representation for first-person activity recognition. Multimedia Tools and Applications, 78, 21673-21694. https://doi.org/10.1007/s11042-019-7429-3; Keceli, A. S., & Can, A. B. (2013, April 24-26). Recognition of human actions by using depth information [Conference presentation]. 2013 21st Signal Processing and Communications Applications Conference (SIU), Haspolat, Turkey. https://doi.org/10.1109/SIU.2013.6531211; Khan, Z. A., & Sohn, W. (2011). Abnormal human activity recognition system based on R-transform and kernel discriminant technique for elderly home care. IEEE Transactions on Consumer Electronics, 57(4), 1843-1850. https://doi.org/10.1109/TCE.2011.6131162; Khan, Z. A., & Sohn, W. (2013). A hierarchical abnormal human activity recognition system based on R-transform and kernel discriminant analysis for elderly health care. Computing, 95(2), 109-127. https://doi.org/10.1007/s00607-012-0216-x; Khraief, C., Benzarti, F., & Amiri, H. (2019). Convolutional neural network based on dynamic motion and shape variations for elderly fall detection. International Journal of Machine Learning and Computing, 9(6), 814-820. https://doi.org/10.18178/ijmlc.2019.9.6.878; Khraief, C., Benzarti, F., & Amiri, H. (2020). Elderly fall detection based on multi-stream deep convolutional networks. Multimedia Tools and Applications, 79, 19537-19560. https://doi.org/10.1007/s11042-020-08812-x; Kim, E., Helal, S., & Cook, D. (2010). Human activity recognition and pattern discovery. IEEE Pervasive Computing, 9(1), 48-53. https://doi.org/10.1109/MPRV.2010.7; Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet classification with deep convolutional neural networks [Conference presentation]. 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA. https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf; Kwolek, B., & Kepski, M. (2014). Human fall detection on embedded platform using depth maps and wireless accelerometer. Computer Methods and Programs in Biomedicine, 117(3), 489-501. https://doi.org/10.1016/j.cmpb.2014.09.005; Laptev, I., & Lindeberg, T. (2003, October 13-16). Space-time interest points [Conference presentation]. Ninth IEEE International Conference on Computer Vision, Nice, France. https://doi.org/10.1109/ICCV.2003.1238378; Laptev, I. (2005). On space-time interest points. International Journal of Computer Vision, 64, 107-123. https://doi.org/10.1007/s11263-005-1838-7; Lawrence, E., Sax, C., Navarro, K. F., & Qiao, M. (2010, February 10-16). Interactive games to improve quality of life for the elderly: Towards integration into a WSN monitoring system [Conference presentation]. 2010 Second International Conference on eHealth, Telemedicine, and Social Medicine, Sint Maarten, Netherlands Antilles. https://doi.org/10.1109/eTELEMED.2010.21; Li, H., Shrestha, A., Fioranelli, F., Le Kernec, J., & Heidari, H. (2018, October 28-31). Hierarchical classification on multimodal sensing for human activity recognition and fall detection [Conference presentation]. 2018 IEEE SENSORS, New Delhi, India. https://doi.org/10.1109/ICSENS.2018.8589797; Li, X., Pang, T., Liu, W., & Wang, T. (2017, October 14-16). Fall detection for elderly person care using convolutional neural networks [Conference presentation]. 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China. https://doi.org/10.1109/CISP-BMEI.2017.8302004; Liu, Y., Li, X., & Jia, L. (2014, June 29 - July 4). Abnormal crowd behavior detection based on optical flow and dynamic threshold [Conference presentation]. 11th World Congress on Intelligent Control and Automation, Shenyang, China. https://doi.org/10.1109/WCICA.2014.7053189; Lohit, S., Bansal, A., Shroff, N., Pillai, J., Turaga, P., & Chellappa, R. (2018, June 18-22). Predicting dynamical evolution of human activities from a single image [Conference presentation]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA. https://doi.org/10.1109/CVPRW.2018.00079; Lu, N., Ren, X., Song, J., & Wu, Y. (2017, August 20-23). Visual guided deep learning scheme for fall detection [Conference presentation]. 2017 13th IEEE Conference on Automation Science and Engineering (CASE), Xi'an, China. https://doi.org/10.1109/COASE.2017.8256202; Ma, C., Shimada, A., Uchiyama, H., Nagahara, H., & Taniguchi, R. (2019). Fall detection using optical level anonymous image sensing system. Optics & Laser Technology, 110, 44-61. https://doi.org/10.1016/j.optlastec.2018.07.013; Ma, X., Wang, H., Xue, B., Zhou, M., Ji, B., & Li, Y. (2014). Depth-based human fall detection via shape features and improved extreme learning machine. IEEE Journal of Biomedical and Health Informatics, 18(6), 1915-1922. https://doi.org/10.1109/JBHI.2014.2304357; Martínez-Villaseñor, L., Ponce, H., Brieva, J., Moya-Albor, E., Núñez-Martínez, J., & Peñafort-Asturiano, C. (2019). UP-Fall detection dataset: A multimodal approach. Sensors, 19(9), 1988. https://doi.org/10.3390/s19091988; Mastorakis, G., & Makris, D. (2014). Fall detection system using Kinect's infrared sensor. Journal of Real-Time Image Processing, 9(4), 635-646. https://doi.org/10.1007/s11554-012-0246-9; Nguyen, T. V., Song, Z., & Yan, S. (2015). STAP: Spatial-Temporal Attention-Aware Pooling for action recognition. IEEE Transactions on Circuits and Systems for Video Technology, 25(1), 77-86. https://doi.org/10.1109/TCSVT.2014.2333151; Nguyen, V. A., Le, T. H., & Nguyen, T. T. (2016). Single camera based fall detection using motion and human shape features. In ACM (Eds.), Proceedings of the Seventh Symposium on Information and Communication Technology - SoICT '16 (pp. 339-344). ACM Press. https://doi.org/10.1145/3011077.3011103; Ni, B., Pei, Y., Moulin, P., & Yan, S. (2013). Multilevel depth and image fusion for human activity detection. IEEE Transactions on Cybernetics, 43(5), 1383-1394. https://doi.org/10.1109/TCYB.2013.2276433; Nivia-Vargas, A. M., & Jaramillo-Jaramillo, I. (2018). La industria de sensores en Colombia. Tecnura, 22(57), 44-54. https://doi.org/10.14483/22487638.13518; Nizam, Y., Mohd, M. N. H., & Jamil, M. M. A. (2017). Human fall detection from depth images using position and velocity of subject. Procedia Computer Science, 105, 131-137. https://doi.org/10.1016/j.procs.2017.01.191; Núñez-Marcos, A., Azkune, G., & Arganda-Carreras, I. (2017). Vision-based fall detection with convolutional neural networks. Wireless Communications and Mobile Computing, 2017, 9474806. https://doi.org/10.1155/2017/9474806; OMS (WHO) (2015). Datos interesantes acerca del envejecimiento. http://www.who.int/ageing/about/facts/es/; Panahi, L., & Ghods, V. (2018). Human fall detection using machine vision techniques on RGB-D images. Biomedical Signal Processing and Control, 44, 146-153. https://doi.org/10.1016/j.bspc.2018.04.014; Pava, R., Pérez-Castillo, J. N., & Niño-Vásquez, L. F. (2021). Perspectiva para el uso del modelo P6 de atención en salud bajo un escenario soportado en IoT y blockchain. Tecnura, 25(67), 112-130. https://doi.org/10.14483/22487638.16159; Pazhoumand-Dar, H., Lam, C.-P., & Masek, M. (2015). Joint movement similarities for robust 3D action recognition using skeletal data. Journal of Visual Communication and Image Representation, 30, 10-21. https://doi.org/10.1016/j.jvcir.2015.03.002; Peng, X., Wang, L., Wang, X., & Qiao, Y. (2016). Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. Computer Vision and Image Understanding, 150, 109-125. https://doi.org/10.1016/j.cviu.2016.03.013; Planinc, R., & Kampel, M. (2013). Introducing the use of depth data for fall detection. Personal and Ubiquitous Computing, 17(6), 1063-1072. https://doi.org/10.1007/s00779-012-0552-z; Preis, J., Kessel, M., Werner, M., & Linnhoff-Popien, C. (2012). Gait recognition with Kinect. https://www.researchgate.net/publication/239862819_Gait_Recognition_with_Kinect/citations; Rafferty, J., Nugent, C. D., Liu, J., & Chen, L. (2017). From activity recognition to intention recognition for assisted living within smart homes. IEEE Transactions on Human-Machine Systems, 47(3), 368-379. https://doi.org/10.1109/THMS.2016.2641388; Rahnemoonfar, M., & Alkittawi, H. (2018, December 10-13). Spatio-temporal convolutional neural network for elderly fall detection in depth video cameras [Conference presentation]. 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA. https://doi.org/10.1109/BigData.2018.8622342; Rosati, S., Balestra, G., & Knaflitz, M. (2018). Comparison of different sets of features for human activity recognition by wearable sensors. Sensors, 18(12), 4189. https://doi.org/10.3390/s18124189; Rougier, C., Meunier, J., St-Arnaud, A., & Rousseau, J. (2007, May 21-23). Fall detection from human shape and motion history using video surveillance [Conference presentation]. 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW'07), Niagara Falls, ON, Canada. https://doi.org/10.1109/AINAW.2007.181; Ryoo, M. S. (2011, November 6-13). Human activity prediction: Early recognition of ongoing activities from streaming videos [Conference presentation]. 2011 International Conference on Computer Vision, Barcelona, Spain. https://doi.org/10.1109/ICCV.2011.6126349; Saini, R., Kumar, P., Roy, P. P., & Dogra, D. P. (2018). A novel framework of continuous human-activity recognition using Kinect. Neurocomputing, 311, 99-111. https://doi.org/10.1016/j.neucom.2018.05.042; Sazonov, E., Metcalfe, K., Lopez-Meyer, P., & Tiffany, S. (2011, November 28 - December 1). RF hand gesture sensor for monitoring of cigarette smoking [Conference presentation]. 2011 Fifth International Conference on Sensing Technology, Palmerston North, New Zealand. https://doi.org/10.1109/ICSensT.2011.6137014; Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A., & Blake, A. (2011, June 20-25). Real-time human pose recognition in parts from single depth images [Conference presentation]. CVPR 2011, Colorado Springs, CO, USA. https://doi.org/10.1109/CVPR.2011.5995316; Soomro, K., Zamir, A. R., & Shah, M. (2012). UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint. https://doi.org/10.48550/arXiv.1212.0402; Sreenidhi, I. (2020). Real-time human fall detection and emotion recognition using embedded device and deep learning. International Journal of Emerging Trends in Engineering Research, 8(3), 780-786. https://doi.org/10.30534/ijeter/2020/28832020; Suto, J., & Oniga, S. (2019). Efficiency investigation from shallow to deep neural network techniques in human activity recognition. Cognitive Systems Research, 54, 37-49. https://doi.org/10.1016/j.cogsys.2018.11.009; Uzunovic, T., Golubovic, E., Tucakovic, Z., Acikmese, Y., & Sabanovic, A. (2018, October 21-23). Task-based control and human activity recognition for human-robot collaboration [Conference presentation]. IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society, Washington DC, USA. https://doi.org/10.1109/IECON.2018.8591206; Venkatesha, S., & Turk, M. (2010, August 23-26). Human activity recognition using local shape descriptors [Conference presentation]. 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey. https://doi.org/10.1109/ICPR.2010.902; Vrigkas, M., Nikou, C., & Kakadiaris, I. A. (2015). A review of human activity recognition methods. Frontiers in Robotics and AI, 2, 28. https://doi.org/10.3389/frobt.2015.00028; Wang, L., Qiao, Y., & Tang, X. (2015, June 7-12). Action recognition with trajectory-pooled deep-convolutional descriptors [Conference presentation]. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA.; Xu, Q., Huang, G., Yu, M., & Guo, Y. (2020). Fall prediction based on key points of human bones. Physica A: Statistical Mechanics and Its Applications, 540, 123205. https://doi.org/10.1016/j.physa.2019.123205; Yan, S., Xiong, Y., & Lin, D. (2018). Spatial temporal graph convolutional networks for skeleton-based action recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12328; Yang, L., Ren, Y., & Zhang, W. (2016). 3D depth image analysis for indoor fall detection of elderly people. Digital Communications and Networks, 2(1), 24-34. https://doi.org/10.1016/j.dcan.2015.12.001; Yang, X., & Tian, Y. (2014). Effective 3D action recognition using EigenJoints. Journal of Visual Communication and Image Representation, 25(1), 2-11. https://doi.org/10.1016/j.jvcir.2013.03.001; Yang, Y., Hou, C., Lang, Y., Guan, D., Huang, D., & Xu, J. (2019). Open-set human activity recognition based on micro-Doppler signatures. Pattern Recognition, 85, 60-69. https://doi.org/10.1016/j.patcog.2018.07.030; Yao, L., Min, W., & Lu, K. (2017). A new approach to fall detection based on the human torso motion model. Applied Sciences, 7(10), 993. https://doi.org/10.3390/app7100993; Du, Y., Wang, W., & Wang, L. (2015, June 7-12). Hierarchical recurrent neural network for skeleton based action recognition [Conference presentation]. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA. https://doi.org/10.1109/CVPR.2015.7298714; Yu, M., Naqvi, S. M., Rhuma, A., & Chambers, J. (2012). One class boundary method classifiers for application in a video-based fall detection system. IET Computer Vision, 6(2), 90-100. https://doi.org/10.1049/iet-cvi.2011.0046; Yu, M., Yu, Y., Rhuma, A., Naqvi, S. M. R., Wang, L., & Chambers, J. A. (2013). An online one class support vector machine-based person-specific fall detection system for monitoring an elderly individual in a room environment. IEEE Journal of Biomedical and Health Informatics, 17(6), 1002-1014. https://doi.org/10.1109/JBHI.2013.2274479; Zhang, H.-B., Zhang, Y.-X., Zhong, B., Lei, Q., Yang, L., Du, J.-X., & Chen, D.-S. (2019). A comprehensive survey of vision-based human action recognition methods. Sensors, 19(5), 1005. https://doi.org/10.3390/s19051005; Zhang, S., Wei, Z., Nie, J., Huang, L., Wang, S., & Li, Z. (2017). A review on human activity recognition using vision-based method. Journal of Healthcare Engineering, 2017, 3090343. https://doi.org/10.1155/2017/3090343; Zhu, Y., Zhao, X., Fu, Y., & Liu, Y. (2011). Sparse coding on local spatial-temporal volumes for human action recognition. In R. Kimmel, R. Klette, & A. Sugimoto (Eds.), Computer Vision - ACCV 2010 (pp. 660-671). Springer. https://doi.org/10.1007/978-3-642-19309-5_51; https://revistas.udistrital.edu.co/index.php/Tecnura/article/view/17413

  7. Academic Journal
  8. Academic Journal
  9. Academic Journal

    Authors: Rojas Bez, José Rafael

    Source: Antrópica. Revista de Ciencias Sociales y Humanidades; Vol. 7 No. 14 (2021): July-December 2021; 121-147; 2448-5241

    Topic: The human audiovisual universe. Anthropology, sociology of art and culture, aesthetics. A text on the audiovisual universe of interest to a broad range of researchers, teachers, artists, ethnologists, sociologists, and anthropologists of the visual.

    File Description: application/pdf

  10. Conference

    Authors: Sánchez Daza, Jesús Eduardo

    Contributors: Sánchez Daza, Jesús Eduardo 0001704627, Sánchez Daza, Jesús Eduardo 0009-0000-2302-5284, Semilleros de Investigación UNAB

    Source: Sánchez, J. E. (2019). Análisis del impacto del factor climático en la demanda energética colombiana. Retrieved from: http://hdl.handle.net/20.500.12749/22316

    Time: 2019

    File Description: application/pdf

    Relation: Generación Creativa : Encuentro de Semilleros de Investigación UNAB; http://hdl.handle.net/20.500.12749/14243; [1] CONVENCIÓN MARCO DE LAS NACIONES UNIDAS SOBRE EL CAMBIO CLIMÁTICO. (1994). Retrieved from: https://unfccc.int/resource/docs/convkp/convsp.pdf; [2] WWF. (2019). Cambia la Energía, Cambia el Clima. Retrieved from: https://d2ouvy59p0dg6k.cloudfront.net/downloads/wwf_cambialaenergia_4.pdf; [3] Panorama energético de Colombia. (2019). Retrieved from: https://www.grupobancolombia.com/wps/portal/empresas/capital-inteligente/actualidad-economicasectorial/especiales/especial-energia-2019/panomaraenergetico-colombia; [4] UPME. (2015). PLAN ENERGÉTICO NACIONAL COLOMBIA: IDEARIO ENERGÉTICO 2050 [Ebook]. Colombia. Retrieved from: http://www1.upme.gov.co/Documents/PEN_IdearioEnergetico2050.pdf; [5] UPME. (2016). Sector minero-energético para la adaptación al cambio climático. Retrieved from: http://www1.upme.gov.co/PromocionSector/Documents/Memorias%20dia%20UPME/Adaptacion_Cambio_Climatico.pdf; [6] CLIMA - IDEAM. (2011). Retrieved from: http://www.ideam.gov.co/web/tiempo-y-clima/clima; [7] UPME. (2016). ESTUDIO DE GENERACIÓN ELÉCTRICA BAJO ESCENARIO DE CAMBIO CLIMÁTICO [Ebook] (1st ed.). Colombia. Retrieved from: http://www1.upme.gov.co/Documents/generacion_electrica_bajo_escenarios_cambio_climatico.pdf; [8] POLÍTICA NACIONAL DE CAMBIO CLIMÁTICO. (2017). [Ebook] (1st ed.). Bogotá. Retrieved from: https://colaboracion.dnp.gov.co/CDT/Conpes/Econ%C3%B3micos/3700.pdf; [9] Rueda, V. M., Henao, J. D. V., & Cardona, C. J. F. (2011). Avances recientes en la predicción de la demanda de electricidad usando modelos no lineales. Dyna, 78(167), 36-43.; [10] Cardona, C. J. F., Henao, J. D. V., & Morales, Y. O. (2008). Caracterización de la demanda mensual de electricidad en Colombia usando un modelo de componentes no observables. Cuadernos de Administración, 21(36), 221-235.; [11] Plan de expansión de referencia generación - transmisión 2017 - 2031. (2017). Ministerio de Minas y Energía -MME-, Unidad de Planeación Minero-Energética -UPME-. Available at: http://www1.upme.gov.co/Energia_electrica/Plan_GT_2017_2031_PREL.pdf; http://hdl.handle.net/20.500.12749/22316; instname:Universidad Autónoma de Bucaramanga - UNAB; reponame:Repositorio Institucional UNAB; repourl:https://repository.unab.edu.co