We introduce the TorontoCity HD mapping benchmark, which covers the full greater Toronto area with 712.5 km² of land, 8,439 km of roads, and approximately 400,000 buildings. Our benchmark provides different perspectives of the world, captured from airplanes, drones, and cars driving around the city. Manually labeling such a large-scale dataset is infeasible; instead, we propose to exploit different sources of high-precision maps to create our ground truth. Towards this goal, we develop algorithms that align all data sources with the maps while requiring minimal human supervision. We have designed a wide variety of tasks, including building height estimation (reconstruction), road centerline and curb extraction, building instance segmentation, building contour extraction (reorganization), semantic labeling, and scene-type classification (recognition). Our pilot study shows that most of these tasks remain difficult for modern convolutional neural networks.
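The abstract does not specify the evaluation protocol for the tasks above. As an illustration only, a standard way to score a semantic labeling task such as this one is per-class intersection-over-union (IoU); the function name, class count, and inputs below are hypothetical, not part of the benchmark definition:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class intersection-over-union between two integer label maps.

    pred, gt: arrays of the same shape holding class indices in [0, num_classes).
    Returns a list with one IoU per class (NaN when the class is absent
    from both maps, so it can be skipped when averaging).
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Tiny worked example: a 2x2 prediction vs. ground truth with two classes.
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
ious = per_class_iou(pred, gt, num_classes=2)
# Class 0: intersection 1 pixel, union 2 pixels -> IoU 0.5
# Class 1: intersection 2 pixels, union 3 pixels -> IoU 2/3
```

The mean of the valid per-class IoUs (mIoU) is the usual single-number summary for segmentation benchmarks.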