Showing 41 - 50 of 9,464 results for '"Ross, David"', query time: 1.34s
  1. 41
    Academic Journal

    This result is not displayed to guests.

  2. 42
    Academic Journal

    Source: Cancers. 13(9)

    Description: After implementing a successful hepatitis C elimination program, the Veterans Health Administration's (VHA) Hepatic Innovation Team (HIT) Collaborative pivoted to focus on improving cirrhosis care. This national program developed teams of providers across the country and engaged them in using systems redesign methods and population health approaches to improve care. The HIT Collaborative developed an Advanced Liver Disease (ALD) Dashboard to identify Veterans with cirrhosis who were due for surveillance for hepatocellular carcinoma (HCC) and other liver care, promoted the use of an HCC Clinical Reminder in the electronic health record, and provided training and networking opportunities. This evaluation aimed to describe the VHA's approach to improving cirrhosis care and identify the facility factors and HIT activities associated with HCC surveillance rates, using a quasi-experimental design. Across all VHA facilities, as the HIT focused on cirrhosis between 2018 and 2019, HCC surveillance rates increased from 46% (IQR 37-53%) to 51% (IQR 42-60%, p < 0.001). The median HCC surveillance rate was 57% in facilities with high ALD Dashboard utilization compared with 45% in facilities with lower utilization (p < 0.001), and 58% in facilities using the HCC Clinical Reminder compared with 47% in facilities not using this tool (p < 0.001) in FY19. Increased use of the ALD Dashboard and adoption of the HCC Clinical Reminder were independently and significantly associated with HCC surveillance rates in multivariate models, controlling for other facility characteristics. In conclusion, the VHA's HIT Collaborative is a national healthcare initiative associated with significant improvement in HCC surveillance rates.

    File Description: application/pdf

  3. 43
    Journal

    Authors: Ross, David

    Source: Chartered accountants journal, Apr 2006; v.85 n.3: p.21-23

  4. 44
    Report

    Description: Videos on the Internet are paired with pieces of text, such as titles and descriptions. This text typically describes the most important content in the video, such as the objects in the scene and the actions being performed. Based on this observation, we propose to use text as a method for learning video representations. To accomplish this, we propose a data collection process and use it to collect 70M video clips shared publicly on the Internet, and we then train a model to pair each video with its associated text. We evaluate the model on several downstream action recognition tasks, including Kinetics, HMDB-51, and UCF-101. We find that this approach is an effective method of pre-training video representations. Specifically, it outperforms all existing methods for self-supervised and cross-modal video representation learning.

    Open Access: http://arxiv.org/abs/2007.14937
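
    The video-text pairing objective this abstract describes can be sketched as a symmetric contrastive loss over a batch of paired embeddings. The following is a minimal NumPy illustration under assumed inputs (pre-computed video and text embedding matrices), not the paper's exact formulation:

    ```python
    import numpy as np

    def contrastive_pairing_loss(video_emb, text_emb, temperature=0.07):
        """Symmetric InfoNCE-style loss: each video should match its own text."""
        # L2-normalize embeddings so dot products are cosine similarities.
        v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
        t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
        logits = v @ t.T / temperature          # (batch, batch) similarity matrix
        labels = np.arange(len(v))              # the diagonal holds the true pairs

        def ce(l):
            l = l - l.max(axis=1, keepdims=True)                    # stability
            log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
            return -log_probs[labels, labels].mean()

        # Cross-entropy in both directions (video->text and text->video).
        return 0.5 * (ce(logits) + ce(logits.T))
    ```

    Correctly paired batches should score a lower loss than shuffled ones, which is the signal that drives the representation learning.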

  5. 45
    Report

    Description: Automatic video captioning aims to train models to generate text descriptions for all segments in a video; however, the most effective approaches require large amounts of manual annotation, which is slow and expensive. Active learning is a promising way to efficiently build a training set for video captioning tasks while reducing the need to manually label uninformative examples. In this work we both explore various active learning approaches for automatic video captioning and show that a cluster-regularized ensemble strategy provides the best active learning approach to efficiently gather training sets for video captioning. We evaluate our approaches on the MSR-VTT and LSMDC datasets using both transformer- and LSTM-based captioning models and show that our novel strategy can achieve high performance while using up to 60% less training data than strong state-of-the-art baselines.
    Comment: Published at the 15th Asian Conference on Computer Vision (ACCV 2020)

    Open Access: http://arxiv.org/abs/2007.13913
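
    The cluster-regularized ensemble idea in this abstract can be sketched as: score each unlabeled example by how much the ensemble disagrees on it, then pick high-disagreement examples while spreading the picks across clusters. This is a simplified illustration with assumed discrete predictions, not the paper's method:

    ```python
    import numpy as np
    from collections import Counter

    def select_for_labeling(ensemble_preds, cluster_ids, k):
        """Pick the k most disagreed-upon examples, at most one per cluster.

        ensemble_preds: (n_models, n_examples) array of discrete predictions.
        cluster_ids:    (n_examples,) cluster assignment for each example.
        """
        n_models, n_examples = ensemble_preds.shape
        scores = np.empty(n_examples)
        for i in range(n_examples):
            counts = np.array(list(Counter(ensemble_preds[:, i]).values()))
            p = counts / n_models
            scores[i] = -(p * np.log(p)).sum()   # vote entropy = disagreement
        chosen, used_clusters = [], set()
        for i in np.argsort(-scores):            # highest disagreement first
            if cluster_ids[i] not in used_clusters:
                chosen.append(int(i))
                used_clusters.add(cluster_ids[i])
            if len(chosen) == k:
                break
        return chosen
    ```

    The cluster constraint keeps the selected batch diverse, so the annotation budget is not spent on many near-duplicate clips.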

  6. 46
    Report

    Description: Semantic segmentation of 3D meshes is an important problem for 3D scene understanding. In this paper we revisit the classic multiview representation of 3D meshes and study several techniques that make them effective for 3D semantic segmentation of meshes. Given a 3D mesh reconstructed from RGBD sensors, our method effectively chooses different virtual views of the 3D mesh and renders multiple 2D channels for training an effective 2D semantic segmentation model. Features from multiple per-view predictions are finally fused on 3D mesh vertices to predict mesh semantic segmentation labels. Using the large-scale indoor 3D semantic segmentation benchmark of ScanNet, we show that our virtual views enable more effective training of 2D semantic segmentation networks than previous multiview approaches. When the 2D per-pixel predictions are aggregated on 3D surfaces, our virtual multiview fusion method achieves significantly better 3D semantic segmentation results than all prior multiview approaches and is competitive with recent 3D convolution approaches.
    Comment: To appear in ECCV 2020

    Open Access: http://arxiv.org/abs/2007.13138
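
    The fusion step described in this abstract, aggregating per-view 2D predictions onto mesh vertices, can be sketched as a visibility-masked average of projected class logits. This is a minimal illustration under assumed array inputs, not the paper's implementation:

    ```python
    import numpy as np

    def fuse_multiview_logits(view_logits, visibility):
        """Average per-view 2D class logits onto 3D mesh vertices.

        view_logits: (n_views, n_vertices, n_classes) logits projected to vertices.
        visibility:  (n_views, n_vertices) boolean, vertex visible in that view.
        Returns a per-vertex class label.
        """
        w = visibility[..., None].astype(float)                # mask hidden views
        fused = (view_logits * w).sum(axis=0) / np.maximum(w.sum(axis=0), 1.0)
        return fused.argmax(axis=1)                            # label per vertex
    ```

    Masking by visibility matters: a vertex occluded in some rendered view should not receive that view's (unreliable) prediction.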

  7. 47
    Report

    Description: Detecting objects in 3D LiDAR data is a core technology for autonomous driving and other robotics applications. Although LiDAR data is acquired over time, most 3D object detection algorithms propose object bounding boxes independently for each frame and neglect the useful information available in the temporal domain. To address this problem, in this paper we propose a sparse LSTM-based multi-frame 3D object detection algorithm. We use a U-Net-style 3D sparse convolution network to extract features for each frame's LiDAR point cloud. These features are fed to the LSTM module together with the hidden and memory features from the last frame to predict the 3D objects in the current frame, as well as hidden and memory features that are passed to the next frame. Experiments on the Waymo Open Dataset show that our algorithm outperforms the traditional frame-by-frame approach by 7.5% mAP@0.7 and other multi-frame approaches by 1.2%, while using less memory and computation per frame. To the best of our knowledge, this is the first work to use an LSTM for 3D object detection in sparse point clouds.
    Comment: To appear in ECCV 2020

    Open Access: http://arxiv.org/abs/2007.12392
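
    The temporal recurrence this abstract describes, carrying hidden and memory features from frame to frame, can be sketched with a bare LSTM step over per-frame feature vectors. This is a toy illustration with an assumed detection head, not the paper's sparse-convolution pipeline:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W, b):
        """One LSTM step: combine current frame features x with carried state."""
        z = W @ np.concatenate([x, h]) + b        # all four gates in one matmul
        i, f, g, o = np.split(z, 4)
        c_next = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h_next = sigmoid(o) * np.tanh(c_next)
        return h_next, c_next

    def detect_sequence(frame_features, W, b, head):
        """Run detection over a LiDAR sequence, passing hidden/memory forward."""
        d = W.shape[0] // 4                       # hidden size from gate weights
        h, c = np.zeros(d), np.zeros(d)
        outputs = []
        for x in frame_features:                  # one feature vector per frame
            h, c = lstm_step(x, h, c, W, b)
            outputs.append(head(h))               # per-frame box predictions
        return outputs
    ```

    The point of the recurrence is that the current frame's predictions can draw on evidence accumulated from earlier sweeps, instead of each frame being detected in isolation.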

  8. 48
    Report

    Description: We present a simple and flexible object detection framework optimized for autonomous driving. Building on the observation that point clouds in this application are extremely sparse, we propose a practical pillar-based approach to fix the imbalance issue caused by anchors. In particular, our algorithm incorporates a cylindrical projection into multi-view feature learning, predicts bounding box parameters per pillar rather than per point or per anchor, and includes an aligned pillar-to-point projection module to improve the final prediction. Our anchor-free approach avoids the hyperparameter search associated with past methods, simplifying 3D object detection while significantly improving upon the state of the art.
    Comment: Accepted to ECCV2020

    Open Access: http://arxiv.org/abs/2007.10323

  9. 49
    Report

    Description: This paper describes the AVA-Kinetics localized human actions video dataset. The dataset is collected by annotating videos from the Kinetics-700 dataset using the AVA annotation protocol, and extending the original AVA dataset with these new AVA-annotated Kinetics clips. The dataset contains over 230k clips annotated with the 80 AVA action classes for each of the humans in key frames. We describe the annotation process and provide statistics about the new dataset. We also include a baseline evaluation using the Video Action Transformer Network on the AVA-Kinetics dataset, demonstrating improved performance for action classification on the AVA test set. The dataset can be downloaded from https://research.google.com/ava/
    Comment: 8 pages, 8 figures

    Open Access: http://arxiv.org/abs/2005.00214

  10. 50
    Report

    Description: We propose DOPS, a fast single-stage 3D object detection method for LiDAR data. Previous methods often make domain-specific design decisions, for example projecting points into a bird's-eye view image in autonomous driving scenarios. In contrast, we propose a general-purpose method that works on both indoor and outdoor scenes. The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes. 3D bounding box parameters are estimated in one pass for every point, aggregated through graph convolutions, and fed into a branch of the network that predicts latent codes representing the shape of each detected object. The latent shape space and shape decoder are learned on a synthetic dataset and then used as supervision for the end-to-end training of the 3D object detection pipeline. Thus our model is able to extract shapes without access to ground-truth shape information in the target dataset. In experiments, our proposed method improves on the state of the art by ~5% on object detection in ScanNet scenes and by 3.4% on the Waymo Open Dataset, while reproducing the shapes of detected cars.
    Comment: To appear in CVPR 2020

    Open Access: http://arxiv.org/abs/2004.01170