
Robotics arXiv Digest [11.12]

Author: WeChat official account arXiv每日学术速递
Published 2021-11-17 11:05:02 (originally posted 2021-11-12)

cs.RO (Robotics): 13 papers in total

【1】 Full-Body Visual Self-Modeling of Robot Morphologies
Link: https://arxiv.org/abs/2111.06389

Authors: Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, Hod Lipson
Affiliation: Columbia University
Comments: Project website: https://robot-morphology.cs.columbia.edu/
Abstract: Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions. These "self-models" allow robots to consider outcomes of multiple possible future actions without trying them out in physical reality. Recent progress in fully data-driven self-modeling has enabled machines to learn their own forward kinematics directly from task-agnostic interaction data. However, forward-kinematics models can only predict limited aspects of the morphology, such as the position of end effectors or the velocity of joints and masses. A key challenge is to model the entire morphology and kinematics without prior knowledge of which aspects of the morphology will be relevant to future tasks. Here, we propose that instead of directly modeling forward kinematics, a more useful form of self-modeling is one that can answer space-occupancy queries conditioned on the robot's state. Such query-driven self-models are continuous in the spatial domain, memory-efficient, fully differentiable, and kinematics-aware. In physical experiments, we demonstrate that a visual self-model is accurate to about one percent of the workspace, enabling the robot to perform various motion planning and control tasks. Visual self-modeling can also allow the robot to detect, localize, and recover from real-world damage, leading to improved machine resiliency. Our project website is at: https://robot-morphology.cs.columbia.edu/
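
To make the query-driven idea concrete, here is a rough sketch (not the authors' architecture; the network size, joint count, and interface are assumptions) of a self-model that maps a joint configuration plus a 3D query point to an occupancy probability, continuous in space and differentiable end to end:

```python
import torch
import torch.nn as nn

class OccupancySelfModel(nn.Module):
    """Hypothetical sketch of a query-driven self-model: given a joint
    configuration q and a 3D query point p, predict the probability
    that p is occupied by the robot's body."""
    def __init__(self, num_joints: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # q: (B, num_joints) joint angles; p: (B, 3) world-frame points
        return torch.sigmoid(self.net(torch.cat([q, p], dim=-1)))

model = OccupancySelfModel(num_joints=4)
q = torch.zeros(8, 4)      # batch of joint configurations
p = torch.rand(8, 3)       # batch of spatial query points
occ = model(q, p)          # (8, 1) occupancy probabilities
```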

【2】 Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation
Link: https://arxiv.org/abs/2111.06383

Authors: I-Chun Arthur Liu, Shagun Uppal, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert, Youngwoon Lee
Affiliations: Cognitive Learning for Vision and Robotics Lab, Robotic Embedded Systems Laboratory, University of Southern California
Comments: Published at the Conference on Robot Learning (CoRL) 2021
Abstract: Learning complex manipulation tasks in realistic, obstructed environments is a challenging problem due to hard exploration in the presence of obstacles and high-dimensional visual observations. Prior work tackles the exploration problem by integrating motion planning and reinforcement learning. However, the motion planner augmented policy requires access to state information, which is often not available in real-world settings. To this end, we propose to distill a state-based motion planner augmented policy into a visual control policy via (1) visual behavioral cloning to remove the motion planner dependency along with its jittery motion, and (2) vision-based reinforcement learning guided by the smoothed trajectories from the behavioral cloning agent. We evaluate our method on three manipulation tasks in obstructed environments and compare it against various reinforcement learning and imitation learning baselines. The results demonstrate that our framework is highly sample-efficient and outperforms state-of-the-art algorithms. Moreover, coupled with domain randomization, our policy is capable of zero-shot transfer to unseen environment settings with distractors. Code and videos are available at https://clvrai.com/mopa-pd
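
As a hedged illustration of step (1), the sketch below shows what visual behavioral cloning from a state-based teacher could look like; the 64x64 input size, network, and 7-D action space are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

# A CNN student policy regresses the state-based teacher's actions
# directly from images (all sizes are illustrative assumptions).
student = nn.Sequential(
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 13 * 13, 256), nn.ReLU(),   # sized for 64x64 inputs
    nn.Linear(256, 7),                         # e.g. a 7-DoF arm action
)
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

def bc_step(images, teacher_actions):
    """One cloning step: images (B, 3, 64, 64), teacher_actions (B, 7)."""
    loss = nn.functional.mse_loss(student(images), teacher_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(bc_step(torch.rand(16, 3, 64, 64), torch.rand(16, 7)))
```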

【3】 Driver-Specific Risk Recognition in Interactive Driving Scenarios using Graph Representation
Link: https://arxiv.org/abs/2111.06342

Authors: Jinghang Li, Chao Lu, Penghui Li, Zheyu Zhang, Cheng Gong, Jianwei Gong
Comments: Submitted to IEEE Transactions on Vehicular Technology
Abstract: This paper presents a driver-specific risk recognition framework for autonomous vehicles that can extract inter-vehicle interactions. This extraction is carried out for urban driving scenarios in a driver-cognitive manner to improve the recognition accuracy of risky scenes. First, clustering analysis is applied to the operation data of drivers to learn the subjective assessment of risky scenes by different drivers and to generate the corresponding risk label for each scene. Second, a graph representation model (GRM) is adopted to unify the features of dynamic vehicles, inter-vehicle interactions, and static traffic markings in real driving scenes and construct them into graphs. The driver-specific risk labels provide ground truth for capturing the risk evaluation criteria of different drivers, while the graph model represents multiple features of the driving scenes. The proposed framework can therefore learn the risk-evaluation patterns of different drivers and establish driver-specific risk identifiers. Last, the performance of the framework is evaluated via experiments on real-world urban driving datasets collected by multiple drivers. The results show that the framework can accurately recognize risks and their levels in real driving environments.
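
For intuition, a driving scene can be turned into a graph along these lines; the node features (position and velocity) and distance-threshold edges below are illustrative assumptions, not the paper's GRM:

```python
import numpy as np

def build_scene_graph(vehicles, interaction_radius=30.0):
    """Hypothetical sketch: each vehicle becomes a node with a feature
    vector, and an edge connects vehicles closer than interaction_radius
    meters (both the features and the threshold are assumptions).

    vehicles: list of dicts with 'pos' (x, y) and 'vel' (vx, vy).
    Returns (node_features, adjacency) as numpy arrays.
    """
    n = len(vehicles)
    nodes = np.array([[*v["pos"], *v["vel"]] for v in vehicles])  # (n, 4)
    adjacency = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(nodes[i, :2] - nodes[j, :2])
            if dist < interaction_radius:
                adjacency[i, j] = adjacency[j, i] = 1.0
    return nodes, adjacency

nodes, adj = build_scene_graph([
    {"pos": (0.0, 0.0), "vel": (10.0, 0.0)},
    {"pos": (12.0, 3.5), "vel": (8.0, 0.0)},
])
```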

【4】 An Online Multi-Index Approach to Human Ergonomics Assessment in the Workplace
Link: https://arxiv.org/abs/2111.06323

Authors: Marta Lorenzini, Wansoo Kim, Arash Ajoudani
Affiliations: Italian Institute of Technology; W. Kim is also with Hanyang University
Comments: 12 pages, 8 figures, 3 tables. Submitted to IEEE Transactions on Human-Machine Systems
Abstract: Work-related musculoskeletal disorders (WMSDs) remain one of the major occupational safety and health problems in the European Union today, so continuous tracking of workers' exposure to the factors that may contribute to their development is paramount. This paper introduces an online approach to monitoring kinematic and dynamic quantities on workers, providing on the spot an estimate of the physical load required in their daily jobs. A set of ergonomic indexes is defined to account for multiple potential contributors to WMSDs, also giving importance to the subject-specific requirements of the workers. To evaluate the proposed framework, a thorough experimental analysis was conducted on twelve human subjects, considering tasks that represent typical working activities in the manufacturing sector. For each task, the ergonomic indexes that best explain the underlying physical load were identified through a statistical analysis, supported by the outcome of a surface electromyography (sEMG) analysis. A comparison was also made with a well-recognized standard tool for evaluating human ergonomics in the workplace, to highlight the benefits introduced by the proposed framework. Results demonstrate the high potential of the framework for identifying physical risk factors and thus adopting preventive measures. Another equally important contribution of this study is the creation of a comprehensive database of human kinodynamic measurements, hosting multiple sensory data of healthy subjects performing typical industrial tasks.
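
As a loosely hedged toy example of what an online ergonomic index could look like (this particular index is an assumption on my part, not one of the paper's definitions), one can track an exponentially weighted moving average of joint torque relative to a subject-specific limit:

```python
import numpy as np

def online_overload_index(torque_stream, torque_limit, alpha=0.05):
    """Toy online index: an exponentially weighted moving average of
    joint torque expressed as a fraction of a subject-specific limit.
    All names and the smoothing constant are illustrative assumptions."""
    index = 0.0
    history = []
    for tau in torque_stream:
        load = abs(tau) / torque_limit        # normalized instantaneous load
        index = (1 - alpha) * index + alpha * load
        history.append(index)
    return np.array(history)

# Example: a worker lifting repetitively, with an assumed 40 N·m limit.
torques = 25 + 10 * np.sin(np.linspace(0, 20, 200))
print(online_overload_index(torques, torque_limit=40.0)[-1])
```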

【5】 6D Pose Estimation with Combined Deep Learning and 3D Vision Techniques for a Fast and Accurate Object Grasping
Link: https://arxiv.org/abs/2111.06276

Authors: Tuan-Tang Le, Trung-Son Le, Yu-Ru Chen, Joel Vidal, Chyi-Yeu Lin
Abstract: Real-time robotic grasping, supporting a subsequent precise object-in-hand operation task, is a priority target for highly advanced autonomous systems. However, an algorithm that can perform sufficiently accurate grasping with time efficiency has yet to be found. This paper proposes a novel two-stage method that combines fast 2D object recognition using a deep neural network with a subsequent accurate and fast 6D pose estimation based on the Point Pair Feature framework, forming a real-time 3D object recognition and grasping solution capable of handling multi-object-class scenes. The proposed solution has the potential to perform robustly in real-time applications requiring both efficiency and accuracy. To validate the method, we conducted extensive and thorough experiments, including the laborious preparation of our own dataset. The results show that the proposed method scores 97.37% accuracy on the 5cm5deg metric and 99.37% on the Average Distance metric, an overall relative improvement of 62% (5cm5deg) and 52.48% (Average Distance). Moreover, pose estimation also showed an average improvement of 47.6% in running time. Finally, to illustrate the overall efficiency of the system in real-time operation, a pick-and-place robotic experiment was conducted, showing a convincing 90% success rate. A video of this experiment is available at https://sites.google.com/view/dl-ppf6dpose/
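
For reference, the 5cm5deg criterion counts a pose estimate as correct when the translation error is under 5 cm and the rotation error is under 5 degrees; a minimal implementation (my own, not the paper's code) looks like this:

```python
import numpy as np

def within_5cm5deg(R_est, t_est, R_gt, t_gt):
    """Return True if the estimated pose (R_est, t_est) is within 5 cm of
    translation error AND 5 degrees of rotation error of the ground truth."""
    trans_err = np.linalg.norm(t_est - t_gt)             # meters
    # Rotation error is the angle of the relative rotation R_est^T @ R_gt.
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return trans_err < 0.05 and rot_err < 5.0

R = np.eye(3)
print(within_5cm5deg(R, np.zeros(3), R, np.array([0.03, 0.0, 0.0])))  # True
```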

【6】 Multi-Resolution Elevation Mapping and Safe Landing Site Detection with Applications to Planetary Rotorcraft
Link: https://arxiv.org/abs/2111.06271

Authors: Pascal Schoppmann, Pedro F. Proença, Jeff Delaune, Michael Pantic, Timo Hinzmann, Larry Matthies, Roland Siegwart, Roland Brockers
Affiliation: Jet Propulsion Laboratory, California Institute of Technology
Comments: 8 pages, 12 figures. Accepted at IROS 2021
Abstract: In this paper, we propose a resource-efficient approach to provide an autonomous UAV with an on-board perception method to detect safe, hazard-free landing sites during flights over complex 3D terrain. We aggregate 3D measurements acquired from a sequence of monocular images by a Structure-from-Motion approach into a local, robot-centric, multi-resolution elevation map of the overflown terrain, which fuses depth measurements according to their lateral surface resolution (pixel footprint) in a probabilistic framework based on the concept of dynamic Level of Detail. Map aggregation only requires depth maps and the associated poses, which are obtained from an onboard Visual Odometry algorithm. An efficient landing site detection method then exploits the features of the underlying multi-resolution map to detect safe landing sites based on slope, roughness, and quality of the reconstructed terrain surface. The performance of the mapping and landing site detection modules is evaluated independently and jointly in simulated and real-world experiments to establish the efficacy of the proposed approach.
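
A simplified sketch of the slope/roughness test is shown below; the thresholds, window size, and single-resolution grid are assumptions, since the paper operates on a probabilistic multi-resolution map:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def safe_landing_mask(elev, cell_size, max_slope_deg=10.0, max_rough=0.05):
    """Illustrative slope/roughness test on an elevation grid: flag cells
    whose local slope and local relief are both below assumed thresholds."""
    gy, gx = np.gradient(elev, cell_size)                # rise over run
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    roughness = np.abs(elev - uniform_filter(elev, size=3))  # local relief
    return (slope_deg < max_slope_deg) & (roughness < max_rough)

elev = 0.02 * np.random.rand(64, 64)   # a nearly flat 6.4 m x 6.4 m patch
print(safe_landing_mask(elev, cell_size=0.1).mean())  # fraction deemed safe
```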

【7】 On the Problem of Reformulating Systems with Uncertain Dynamics as a Stochastic Differential Equation
Link: https://arxiv.org/abs/2111.06084

Authors: Thomas Lew, Apoorva Sharma, James Harrison, Edward Schmerling, Marco Pavone
Affiliation: Department of Aeronautics & Astronautics, Stanford University

【8】 Learning by Cheating: An End-to-End Zero-Shot Framework for Autonomous Drone Navigation
Link: https://arxiv.org/abs/2111.06056

Authors: Praveen Venkatesh, Viraj Shah, Vrutik Shah, Yash Kamble, Joycee Mekie
Affiliation: Indian Institute of Technology, Gandhinagar, India

【9】 csBoundary: City-scale Road-boundary Detection in Aerial Images for High-definition Maps
Link: https://arxiv.org/abs/2111.06020

Authors: Zhenhua Xu, Yuxuan Liu, Lu Gan, Xiangcheng Hu, Yuxiang Sun, Lujia Wang, Ming Liu
Abstract: High-Definition (HD) maps can provide precise geometric and semantic information about static traffic environments for autonomous driving. The road boundary is one of the most important pieces of information contained in HD maps, since it distinguishes between road areas and off-road areas and can guide vehicles to drive within road areas. But it is labor-intensive to annotate road boundaries for HD maps at the city scale. To enable automatic HD map annotation, current work uses semantic segmentation or iterative graph growing for road-boundary detection. However, the former cannot ensure topological correctness since it works at the pixel level, while the latter suffers from inefficiency and drifting issues. To solve these problems, in this letter we propose a novel system termed csBoundary to automatically detect road boundaries at the city scale for HD map annotation. Our network takes an aerial image patch as input and directly infers the continuous road-boundary graph (i.e., vertices and edges) from this image. To generate the city-scale road-boundary graph, we stitch the graphs obtained from all image patches. csBoundary is evaluated and compared on a public benchmark dataset, and the results demonstrate our superiority. An accompanying demonstration video is available at our project page: https://sites.google.com/view/csboundary/
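
As a toy illustration of the stitching step (a deliberate simplification on my part, not the paper's algorithm), per-patch graphs can be merged by shifting vertices into global coordinates and fusing near-duplicate vertices along patch borders:

```python
import numpy as np

def stitch_graphs(patch_graphs, merge_dist=2.0):
    """Collect vertices/edges from all patches in global coordinates and
    greedily merge vertex pairs closer than merge_dist pixels, which fuses
    boundaries that cross patch borders (all parameters are assumptions).

    patch_graphs: list of (vertices, edges, offset) where vertices is a
    list of (x, y) in patch coordinates, edges a list of index pairs, and
    offset the patch origin in the global image.
    """
    vertices, edges = [], []
    for verts, edgs, offset in patch_graphs:
        base = len(vertices)
        vertices.extend([(x + offset[0], y + offset[1]) for x, y in verts])
        edges.extend([(a + base, b + base) for a, b in edgs])
    vertices = np.array(vertices, dtype=float)
    # Greedy merge: remap each vertex to the first earlier vertex in range.
    remap = list(range(len(vertices)))
    for i in range(len(vertices)):
        for j in range(i):
            if np.linalg.norm(vertices[i] - vertices[j]) < merge_dist:
                remap[i] = j
                break
    edges = [(remap[a], remap[b]) for a, b in edges if remap[a] != remap[b]]
    return vertices, edges
```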

【10】 Yaw-Guided Imitation Learning for Autonomous Driving in Urban Environments
Link: https://arxiv.org/abs/2111.06017

Authors: Yandong Liu, Chengzhong Xu, Hui Kong
Affiliation: The State Key Laboratory of Internet of Things for Smart City (SKL-IOTSC), Department of Computer Science, University of Macau
Comments: 9 pages, 9 figures

【11】 AlphaGarden: Learning to Autonomously Tend a Polyculture Garden
Link: https://arxiv.org/abs/2111.06014

Authors: Mark Presten, Yahav Avigal, Mark Theis, Satvik Sharma, Rishi Parikh, Shrey Aeron, Sandeep Mukherjee, Sebastian Oehme, Simeon Adebola, Walter Teitelbaum, Varun Kamat, Ken Goldberg
Comments: 7 pages, 7 figures, 2 tables
Abstract: This paper presents AlphaGarden: an autonomous polyculture garden that prunes and irrigates living plants in a 1.5 m x 3.0 m physical testbed. AlphaGarden uses an overhead camera and sensors to track the plant distribution and soil moisture. We model individual plant growth and interplant dynamics to train a policy that chooses actions to maximize leaf coverage and diversity. For autonomous pruning, AlphaGarden uses two custom-designed pruning tools and a trained neural network to detect prune points. We present results for four 60-day garden cycles. Results suggest AlphaGarden can autonomously achieve 0.96 normalized diversity with pruning shears while maintaining an average canopy coverage of 0.86 during the peak of the cycle. Code, datasets, and supplemental material can be found at https://github.com/BerkeleyAutomation/AlphaGarden
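
To give the quoted metrics some shape, here is a back-of-the-envelope sketch with plausible definitions (assumptions on my part, not necessarily the paper's exact formulas): canopy coverage as the fraction of covered pixels, and normalized diversity as the Shannon entropy of per-species coverage divided by its maximum:

```python
import numpy as np

def coverage_and_diversity(label_map, num_species):
    """label_map: 2D int array, 0 = soil, 1..num_species = plant type.
    Returns (canopy coverage, normalized diversity in [0, 1])."""
    plant = label_map > 0
    coverage = plant.mean()
    counts = np.bincount(label_map[plant].ravel(),
                         minlength=num_species + 1)[1:]
    p = counts / counts.sum()
    p = p[p > 0]
    # Shannon entropy normalized by its maximum, log(num_species).
    diversity = -(p * np.log(p)).sum() / np.log(num_species)
    return coverage, diversity

label_map = np.random.randint(0, 5, size=(100, 200))  # 4 species + soil
print(coverage_and_diversity(label_map, num_species=4))
```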

【12】 A soft thumb-sized vision-based sensor with accurate all-round force perception
Link: https://arxiv.org/abs/2111.05934

Authors: Huanbo Sun, Katherine J. Kuchenbecker, Georg Martius
Affiliations: Autonomous Learning Group, Max Planck Institute for Intelligent Systems, Tübingen, Germany; Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
Comments: 1 table, 5 figures, 24 pages for the main manuscript; 5 tables, 12 figures, 27 pages for the supplementary material; 8 supplementary videos
Abstract: Vision-based haptic sensors have emerged as a promising approach to robotic touch due to affordable high-resolution cameras and successful computer-vision techniques. However, their physical design and the information they provide do not yet meet the requirements of real applications. We present a robust, soft, low-cost, vision-based, thumb-sized 3D haptic sensor named Insight: it continually provides a directional force-distribution map over its entire conical sensing surface. Constructed around an internal monocular camera, the sensor has only a single layer of elastomer over-molded on a stiff frame to guarantee sensitivity, robustness, and soft contact. Furthermore, Insight is the first system to combine photometric stereo and structured light using a collimator to detect the 3D deformation of its easily replaceable flexible outer shell. The force information is inferred by a deep neural network that maps images to the spatial distribution of 3D contact force (normal and shear). Insight has an overall spatial resolution of 0.4 mm, force magnitude accuracy around 0.03 N, and force direction accuracy around 5 degrees over a range of 0.03-2 N for numerous distinct contacts with varying contact area. The presented hardware and software design concepts can be transferred to a wide variety of robot parts.
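
Schematically, the image-to-force mapping can be pictured as a small encoder-decoder network like the sketch below; the layer sizes and output resolution are assumptions, not Insight's actual network:

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Schematic stand-in for the image-to-force mapping: a small
    encoder-decoder turning a camera image of the deformed shell into a
    3-channel force map (normal + two shear components)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) -> force map: (B, 3, H, W)
        return self.decoder(self.encoder(image))

force_map = ForceMapNet()(torch.rand(1, 3, 64, 64))   # (1, 3, 64, 64)
```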

【13】 A Portable and Passive Gravity Compensation Arm Support for Drone Teleoperation
Link: https://arxiv.org/abs/2111.05891

Authors: Carine Rognon, Loic Grossen, Stefano Mintchev, Jenifer Miehlbradt, Silvestro Micera, Dario Floreano
Affiliations: Laboratory of Intelligent Systems, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Environmental Robotics Laboratory, Dep. of Environmental Systems Science, ETHZ, Zurich, Switzerland
Comments: 13 pages, 12 figures, 4 tables
Abstract: Gesture-based interfaces are often used to achieve a more natural and intuitive teleoperation of robots. Yet, sometimes, gesture control requires postures or movements that cause significant fatigue to the user. In a previous user study, we demonstrated that naïve users can control a fixed-wing drone with torso movements while their arms are spread out. However, this posture induces significant arm fatigue. In this work, we present a passive arm support that compensates the arm weight with a mean torque error smaller than 0.005 N/kg over more than 97% of the range of motion used by subjects to fly, thereby reducing shoulder muscular fatigue by 58% on average. In addition, the arm support is designed to fit users from the body dimensions of the 1st-percentile female to the 99th-percentile male. The performance of the arm support is described with a mechanical model, and its implementation is validated with both a mechanical characterization and a user study that measures flight performance, shoulder muscle activity, and user acceptance.
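
For intuition on the underlying mechanics, the sketch below models the arm as a single link whose gravity torque is m * g * l * cos(theta) and balances it with an idealized zero-free-length-spring balancer (a standard passive compensation technique; the parameters and the 2% mismatch are assumed for illustration, and the paper's mechanism differs in detail):

```python
import numpy as np

g = 9.81
m, l = 3.5, 0.30                    # kg, m: assumed arm mass and CoM distance
theta = np.radians(np.linspace(-30, 90, 200))   # assumed range of motion

# Gravity torque about the shoulder, theta measured from horizontal.
tau_gravity = m * g * l * np.cos(theta)
# An ideal zero-free-length spring balancer also produces a torque
# proportional to cos(theta); a small tuning mismatch leaves a residual.
tau_support = 0.98 * m * g * l * np.cos(theta)  # 2% mismatch, assumed

residual_per_kg = np.abs(tau_gravity - tau_support) / m
print(residual_per_kg.max())        # worst-case residual torque per kg of arm
```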
