Medical imaging is a cornerstone of modern healthcare delivery, providing essential insights for effective diagnosis and treatment planning. Among the myriad imaging modalities, computed tomography (CT) and chest X-rays stand out for their widespread clinical use, with approximately 400 million CT and 1.4 billion chest X-ray examinations performed globally each year. Recent advancements in detector technology have given rise to photon-counting CT, which promises improved spatial and energy resolution along with enhanced low-dose imaging capabilities. However, elevated image noise and ring artifacts, stemming from the higher spatial and energy resolution and from inconsistencies between detector elements, pose significant hurdles, degrading image quality and complicating the diagnostic process. Beyond CT imaging, the volume of chest X-ray examinations continues to grow, placing increasing pressure on radiology departments that are already stretched thin. Moreover, advanced and innovative techniques in CT lead to a steady increase in the number of images that radiologists are required to read, further exacerbating their workload. To address these challenges, this thesis leverages generative artificial intelligence methods throughout the medical imaging value chain. For photon-counting CT imaging, this thesis addresses inverse problems using diffusion and Poisson flow generative models (PFGM++). Syn2Real synthesizes realistic ring artifacts to efficiently generate training data for deep learning-based artifact correction. For image denoising, the thesis introduces methods that capitalize on the robustness of PFGM++ in supervised and unsupervised versions of posterior sampling Poisson flow generative models, culminating in Poisson flow consistency models: a novel family of deep generative models that combines the robustness of PFGM++ with the efficient single-step sampling and flexibility of consistency models.
Moreover, this thesis works towards addressing the global shortage of radiologists by improving medical vision-language models through CheXalign: a novel framework that leverages publicly available datasets, containing paired chest X-rays and radiology reports written in a clinical setting, together with reference-based metrics to generate high-quality preference data. This in turn enables the application of direct alignment algorithms that increase the probability of good reports while decreasing the probability of bad ones, improving overall report quality. Partial automation of chest X-ray radiology report generation, in which language models are used to draft initial reports, holds great promise for more efficient workflows, reducing burnout and allowing radiologists to allocate more time to more advanced imaging studies, such as photon-counting CT.