Most vision systems can tell you what is in an image. Far fewer can tell you where that object sits in three dimensions from a single photograph. We release WildDet3D, an open model for promptable monocular 3D object detection. Given a single RGB image, it predicts 3D bounding boxes that estimate each object's position, size, and orientation in metric coordinates, and it accepts multiple prompt types, including text queries, point prompts, and 2D bounding boxes. WildDet3D generalizes across cameras with different resolutions, aspect ratios, and optics without fine-tuning, and it can incorporate additional geometric signals such as sparse depth, LiDAR, or time-of-flight data. Alongside the model, we release WildDet3D-Data, a dataset of over one million images with 3.7 million verified 3D annotations spanning more than 13K object categories, including over 100K human-annotated images. WildDet3D achieves state-of-the-art performance on Omni3D and strong zero-shot transfer to Argoverse 2, ScanNet, and our in-the-wild benchmark spanning 700+ categories.
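To make the prediction format concrete, the metric 3D boxes described above can be sketched as a simple data structure. This is an illustrative assumption only: the class name, field names, and coordinate conventions below are not WildDet3D's actual API.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical sketch of one predicted 3D box: a center position in camera
# coordinates (meters), a metric size, an orientation, a confidence score,
# and a category label (e.g. matched against a text prompt).
@dataclass
class Box3D:
    position: Tuple[float, float, float]     # object center (x, y, z), meters
    size: Tuple[float, float, float]         # width, height, length, meters
    orientation: Tuple[float, float, float]  # rotation (yaw, pitch, roll), radians
    score: float                             # detection confidence in [0, 1]
    label: str                               # predicted category name

    def volume(self) -> float:
        """Metric volume of the box in cubic meters."""
        w, h, l = self.size
        return w * h * l

# Example: a person standing 4.5 m in front of the camera.
box = Box3D(position=(1.2, 0.0, 4.5), size=(0.5, 1.8, 0.5),
            orientation=(0.0, 0.1, 0.0), score=0.92, label="person")
```

A detector output for one image would then be a list of such boxes, one per matched prompt or detected instance.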