There has been growing interest in applying deep learning to radiology image analysis, such as tissue characterization, a key component of computer-aided diagnosis systems used for automatic lesion detection and subsequent clinical planning. In practice, however, developing a robust and reliable deep learning model for computer-aided diagnosis remains highly challenging due to the combination of high heterogeneity in medical images and the relative scarcity of training samples. In particular, annotating and labeling medical images is far more expensive and time-consuming than in other applications, and often requires manual work by multiple domain experts. We propose a multi-stage, self-paced learning framework that uses a convolutional neural network (CNN) to classify computed tomography image patches. Our key contribution is to augment the set of training samples by refining unlabeled instances with a self-paced learning CNN. By implementing the framework on an NVIDIA DGX-1 high-performance computing server, we obtained experimental results showing that the self-paced boosted network consistently outperformed the original network even with very scarce manual labels. This performance gain trades increased computational load, made feasible by the computational power of the DGX-1, for human labor. Applications with limited training samples, such as medical image analysis, can benefit from the proposed framework.
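To make the multi-stage self-paced idea concrete, the following is a minimal sketch of such a training loop, under stated assumptions: a simple nearest-centroid classifier stands in for the CNN, the data are synthetic, and all function names (`self_paced_training`, `top_frac`, etc.) are illustrative rather than taken from the paper. At each stage the current model pseudo-labels the unlabeled pool, the most confident predictions are folded into the training set, and the model is retrained.

```python
# Illustrative sketch of multi-stage self-paced learning (not the paper's code).
# A nearest-centroid classifier replaces the CNN so the example is self-contained.
import numpy as np

def train_centroids(X, y):
    """'Train' by computing one centroid per class (stand-in for CNN training)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(model, X):
    """Predict a label per sample plus a confidence score
    (margin between the two nearest centroids)."""
    dists = np.stack([np.linalg.norm(X - mu, axis=1) for mu in model.values()])
    labels = np.array(list(model.keys()))[dists.argmin(axis=0)]
    sorted_d = np.sort(dists, axis=0)
    conf = sorted_d[1] - sorted_d[0]  # larger margin = more confident
    return labels, conf

def self_paced_training(X_lab, y_lab, X_unlab, stages=3, top_frac=0.3):
    """At each stage, pseudo-label the unlabeled pool and absorb the
    most confident fraction into the training set, then retrain."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(stages):
        if len(pool) == 0:
            break
        model = train_centroids(X_train, y_train)
        pseudo, conf = predict_with_confidence(model, pool)
        k = max(1, int(top_frac * len(pool)))
        keep = np.argsort(conf)[-k:]  # "easy" (high-confidence) samples first
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, pseudo[keep]])
        pool = np.delete(pool, keep, axis=0)
    return train_centroids(X_train, y_train)
```

The selection rule (take only the most confident pseudo-labels, then gradually expand) is the "self-paced" element: the model curates its own curriculum from easy to hard, which is what allows a very small manually labeled set to be boosted with unlabeled data.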