Showing 1 - 10 of 670 results for '"Bipedal locomotion"' (query time: 1.53s)
  1. Academic Journal
  2. Academic Journal
  3. Academic Journal
  4. Academic Journal
  5. Academic Journal
  6. Academic Journal
  7. Conference

    Contributors: Équipe Mouvement des Systèmes Anthropomorphes (LAAS-GEPETTO), Laboratoire d'analyse et d'architecture des systèmes (LAAS), Université Toulouse Capitole (UT Capitole), Université de Toulouse (UT)-Université de Toulouse (UT)-Institut National des Sciences Appliquées - Toulouse (INSA Toulouse), Institut National des Sciences Appliquées (INSA)-Université de Toulouse (UT)-Institut National des Sciences Appliquées (INSA)-Université Toulouse - Jean Jaurès (UT2J), Université de Toulouse (UT)-Université Toulouse III - Paul Sabatier (UT3), Université de Toulouse (UT)-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université de Toulouse (UT)-Université Toulouse Capitole (UT Capitole), Université de Toulouse (UT), Augmenting human comfort in the factory using cobots (AUCTUS), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut Polytechnique de Bordeaux (Bordeaux INP), ANR-19-P3IA-0004,ANITI,Artificial and Natural Intelligence Toulouse Institute(2019), ANR-10-EQPX-0044,ROBOTEX,Réseau national de plateformes robotiques d'excellence(2010), ANR-21-ESRE-0015,TIRREX,Infrastructure technologique pour la recherche d'excellence en robotique(2021), ANR-21-LCV3-0002,Dynamo-Grade,Dynamograde - La force de la marche(2021), ANR-21-CE10-0001,ASAP-HRC,Repenser l'Autonomie en Collaboration Homme-Robot, vers un partage de l'Action et de la Perception(2021)

    Source: IEEE-RAS International Conference on Humanoid Robots 2023 ; https://hal.science/hal-04191553 ; IEEE-RAS International Conference on Humanoid Robots 2023, Dec 2023, Austin (Texas), United States. ⟨10.1109/Humanoids57100.2023.10375224⟩ ; https://ieeexplore.ieee.org/abstract/document/10375224

    Geographic Subject: Austin (Texas), United States

  8. Thesis

    Authors: Yanguas Rojas, David Reinerio

    Contributors: Mojica Nava, Eduardo Alirio, Programa de Investigacion sobre Adquisicion y Analisis de Señales Paas-Un, orcid:0000-0001-5874-721X, Yanguas Rojas, David, David R. Yanguas Rojas, David Yanguas-Rojas

    File Description: xxi, 158 pages; application/pdf

    Relation: https://repositorio.unal.edu.co/handle/unal/85427; Universidad Nacional de Colombia; Repositorio Institucional Universidad Nacional de Colombia; https://repositorio.unal.edu.co/

  9. Academic Journal
  10. Academic Journal

    Source: Grunstra, N. D. S., Betti, L., Fischer, B., Haeusler, M., Pavlicev, M., Stansfield, E., Trevathan, W., Webb, N. M., Wells, J. C. K., Rosenberg, K. R. & Mitteroecker, P. 2023, 'There is an obstetrical dilemma: Misconceptions about the evolution of human childbirth and pelvic form', American Journal of Biological Anthropology. <https://onlinelibrary.wiley.com/doi/10.1002/ajpa.24802>