arxiv_robotics's Issues
2020: Transferable Task Execution from Pixels through Deep Planning Domain Learning
Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya Ogata, Dieter Fox
7 pages, 6 figures. Conference paper accepted at the International Conference on Robotics and Automation (ICRA) 2020
https://arxiv.org/abs/2003.03726
2021: Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility
Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata
https://arxiv.org/abs/2112.06442
2021: Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration
Edward Johns
Published at ICRA 2021. Webpage and video: this https URL
https://arxiv.org/abs/2105.06411
2021: Learning Multi-Stage Tasks with One Demonstration via Self-Replay
Norman Di Palo, Edward Johns
Published at the 5th Conference on Robot Learning (CoRL) 2021
https://arxiv.org/abs/2111.07447
2022: Inertial Hallucinations -- When Wearable Inertial Devices Start Seeing Things
Alessandro Masullo, Toby Perrett, Tilo Burghardt, Ian Craddock, Dima Damen, Majid Mirmehdi
https://arxiv.org/abs/2207.06789
2017: Deep Predictive Policy Training using Reinforcement Learning
Ali Ghadirzadeh, Atsuto Maki, Danica Kragic, Mårten Björkman
Submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems 2017 (IROS 2017)
https://arxiv.org/abs/1703.00727
2022: Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning
Eugene Valassakis, Georgios Papagiannis, Norman Di Palo, Edward Johns
To be published at IROS 2022. 7 figures, 8 pages. Videos and supplementary material are available at: this https URL
https://arxiv.org/abs/2204.02863
2020: Compensation for undefined behaviors during robot task execution by switching controllers depending on embedded dynamics in RNN
Kanata Suzuki, Hiroki Mori, Tetsuya Ogata
To appear in IEEE Robotics and Automation Letters (RA-L) and IEEE International Conference on Robotics and Automation (ICRA 2021)
https://arxiv.org/abs/2003.04862
2022: Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space
Jinghuan Shang, Srijan Das, Michael S. Ryoo
Pre-print. 20 pages
https://arxiv.org/abs/2206.11895
2022: FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms
Jianhao Jiao, Hexiang Wei, Tianshuai Hu, Xiangcheng Hu, Yilong Zhu, Zhijian He, Jin Wu, Jingwen Yu, Xupeng Xie, Huaiyang Huang, Ruoyu Geng, Lujia Wang, Ming Liu
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022, 6 pages, 6 figures. URL: this https URL
https://arxiv.org/abs/2208.11865
2018: Task-Embedded Control Networks for Few-Shot Imitation Learning
Stephen James, Michael Bloesch, Andrew J. Davison
Published at the Conference on Robot Learning (CoRL) 2018
https://arxiv.org/abs/1810.03237
2022: Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use
Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya Ogata
https://arxiv.org/abs/2206.14530
2020: SAFARI: Safe and Active Robot Imitation Learning with Imagination
Norman Di Palo, Edward Johns
https://arxiv.org/abs/2011.09586
2017: One-Shot Visual Imitation Learning via Meta-Learning
Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, Sergey Levine
Conference on Robot Learning, 2017 (to appear). First two authors contributed equally. Video available at this https URL
https://arxiv.org/abs/1709.04905
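The core idea behind this meta-learning line of work (in the style of MAML) is: adapt the parameters to a new task with one inner gradient step, then update the initial parameters so that the post-adaptation loss is low. Below is a minimal numerical sketch on a toy 1-D linear regression model, not the paper's vision architecture; the function name, learning rates, and analytic gradients are illustrative only.

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.05, outer_lr=0.1):
    """One meta-update of MAML for the toy model y = theta * x with MSE loss.

    Each task is a pair of arrays (x, y). Gradients are written out
    analytically for this one-parameter model.
    """
    meta_grad = 0.0
    for x, y in tasks:
        # Inner step: adapt theta to the task with one gradient step on MSE.
        grad = np.mean(2 * (theta * x - y) * x)
        theta_prime = theta - inner_lr * grad
        # Outer gradient: derivative of the post-adaptation loss w.r.t. the
        # ORIGINAL theta, via d(theta_prime)/d(theta) = 1 - inner_lr * mean(2 x^2).
        dtp_dt = 1 - inner_lr * np.mean(2 * x ** 2)
        meta_grad += np.mean(2 * (theta_prime * x - y) * x) * dtp_dt
    return theta - outer_lr * meta_grad / len(tasks)

# Meta-train on a single toy task y = 2x; theta converges toward 2,
# the point from which one inner step fits the task exactly.
theta = 0.0
task = (np.array([1.0, -1.0]), np.array([2.0, -2.0]))
for _ in range(100):
    theta = maml_step(theta, [task])
print(theta)
```

The one-shot aspect of the paper corresponds to the inner step using a single demonstration; here it is a single gradient step on one small dataset.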
2021: Spatial Attention Point Network for Deep-learning-based Robust Autonomous Robot Motion Generation
Hideyuki Ichiwara, Hiroshi Ito, Kenjiro Yamamoto, Hiroki Mori, Tetsuya Ogata
https://arxiv.org/abs/2103.01598
2019: Learning One-Shot Imitation from Humans without Humans
Alessandro Bonardi, Stephen James, Andrew J. Davison
Videos can be found here: this https URL
https://arxiv.org/abs/1911.01103
2021: How to select and use tools? : Active Perception of Target Objects Using Multimodal Deep Learning
Namiko Saito, Tetsuya Ogata, Satoshi Funabashi, Hiroki Mori, Shigeki Sugano
Best Paper Award in Cognitive Robotics at ICRA 2021. Published in IEEE Robotics and Automation Letters (RA-L) 2021 and in the Proceedings of the 2021 International Conference on Robotics and Automation (ICRA 2021)
https://arxiv.org/abs/2106.02445
2022: Creating a Dynamic Quadrupedal Robotic Goalkeeper with Reinforcement Learning
Xiaoyu Huang, Zhongyu Li, Yanzhen Xiang, Yiming Ni, Yufeng Chi, Yunhao Li, Lizhi Yang, Xue Bin Peng, Koushil Sreenath
First two authors contributed equally. Accompanying video is at this https URL
https://arxiv.org/abs/2210.04435
2022: DynaVINS: A Visual-Inertial SLAM for Dynamic Environments
Seungwon Song, Hyungtae Lim, Alex Junho Lee, Hyun Myung
8 pages, accepted to IEEE RA-L (August 22, 2022)
https://arxiv.org/abs/2208.11500
2017: Overcoming Exploration in Reinforcement Learning with Demonstrations
Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel
8 pages, ICRA 2018
https://arxiv.org/abs/1709.10089
2022: RB2: Robotic Manipulation Benchmarking with a Twist
Sudeep Dasari, Jianren Wang, Joyce Hong, Shikhar Bahl, Yixin Lin, Austin Wang, Abitha Thankaraj, Karanbir Chahal, Berk Calli, Saurabh Gupta, David Held, Lerrel Pinto, Deepak Pathak, Vikash Kumar, Abhinav Gupta
Accepted at the NeurIPS 2021 Datasets and Benchmarks Track
https://arxiv.org/abs/2203.08098
2015: End-to-End Training of Deep Visuomotor Policies
Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel
Updated with revisions for the JMLR final version
https://arxiv.org/abs/1504.00702
2020: SQUIRL: Robust and Efficient Learning from Video Demonstration of Long-Horizon Robotic Manipulation Tasks
Bohan Wu, Feng Xu, Zhanpeng He, Abhi Gupta, Peter K. Allen
8 pages
https://arxiv.org/abs/2003.04956
2015: Deep Spatial Autoencoders for Visuomotor Learning
Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel
Published in the International Conference on Robotics and Automation (ICRA)
https://arxiv.org/abs/1509.06113
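The deep spatial autoencoder work above popularized the spatial soft-argmax: a softmax over each feature channel's spatial activations followed by the expected pixel coordinate, producing a small set of 2-D feature points suitable for control. A minimal NumPy sketch of that operation (the function name, temperature value, and [-1, 1] coordinate convention are illustrative choices, not taken from the paper):

```python
import numpy as np

def spatial_soft_argmax(feature_map, temperature=1.0):
    """Expected (x, y) image coordinate of each feature channel.

    feature_map: (C, H, W) activations. Returns (C, 2) points in [-1, 1].
    """
    c, h, w = feature_map.shape
    logits = feature_map.reshape(c, -1) / temperature
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)            # softmax per channel
    probs = probs.reshape(c, h, w)
    # Pixel-coordinate grids normalized to [-1, 1].
    ys, xs = np.meshgrid(np.linspace(-1, 1, h),
                         np.linspace(-1, 1, w), indexing="ij")
    ex = (probs * xs).sum(axis=(1, 2))                   # expected x per channel
    ey = (probs * ys).sum(axis=(1, 2))                   # expected y per channel
    return np.stack([ex, ey], axis=1)

# A sharp activation peak at the center of a 5x5 map yields a point near (0, 0).
fmap = np.zeros((1, 5, 5))
fmap[0, 2, 2] = 10.0
print(spatial_soft_argmax(fmap, temperature=0.1))
```

The temperature controls how peaked the softmax is; a lower value makes the expected coordinate approach the hard argmax while keeping the operation differentiable.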
2022: Guided Visual Attention Model Based on Interactions Between Top-down and Bottom-up Information for Robot Pose Prediction
Hyogo Hiruma, Hiroki Mori, Tetsuya Ogata
https://arxiv.org/abs/2202.10036
2017: Motion Switching with Sensory and Instruction Signals by designing Dynamical Systems using Deep Neural Network
Kanata Suzuki, Hiroki Mori, Tetsuya Ogata
8 pages, 6 figures, accepted for publication in RA-L. An accompanying video is available at this https URL
https://arxiv.org/abs/1712.05109