Modern medical imaging: the technology is here, adoption is the hurdle

The NYTimes and others have made fun of Japan for still using fax machines, and I see a similar story here: a gap between what a technology can do and what practice actually delivers. In fact, almost every modern scanner and PACS (picture archiving and communication system) today already supports web-based access, 3D visualization, and secure data sharing – it’s just that patients rarely see these features. For example, a recent survey found that 100% of hospitals still hand patients CDs of their scans, while only 8% would email images and only 4% offered any patient portal viewing. In this day and age, “the data is too heavy” is no longer an acceptable excuse. It is engineering’s job to enable access, and UX’s job to dream and innovate about accessibility. Plus, imagine what we could do if we had 3D access.

The issue is not a lack of technical capability, but that many hospitals cling to legacy workflows or simply haven’t built user-friendly interfaces yet. You – as the patient – should understand that you could have interactive, web-ready images. The responsibility lies with healthcare IT and engineers to deploy modern standards (like WADO-RS and SMART-on-FHIR) so you can easily view and share your own scans, rather than being limited to old prints or disks.
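To make “modern standards” concrete, here is a minimal sketch of what a WADO-RS retrieve looks like: it is just an HTTP GET against a predictable URL path. The base URL and UIDs below are hypothetical placeholders, not any real hospital’s endpoint.

```typescript
// Build a WADO-RS retrieve URL (the DICOMweb retrieve transaction).
// The path shape /studies/{uid}/series/{uid}/instances/{uid} comes
// from the standard; baseUrl and the UIDs here are made-up examples.
function buildWadoRsUrl(
  baseUrl: string,
  studyUid: string,
  seriesUid?: string,
  instanceUid?: string
): string {
  let url = `${baseUrl}/studies/${studyUid}`;
  if (seriesUid) url += `/series/${seriesUid}`;
  if (seriesUid && instanceUid) url += `/instances/${instanceUid}`;
  return url;
}

const url = buildWadoRsUrl(
  "https://pacs.example.org/dicomweb", // hypothetical server
  "1.2.840.113619.2.1.1",
  "1.2.840.113619.2.1.2"
);
// A viewer would then fetch it, asking for DICOM parts:
// fetch(url, { headers: { Accept: 'multipart/related; type="application/dicom"' } })
```

The point is that this is plain HTTPS – any browser, and any patient portal, could speak it today.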

DICOM and “Digital Film”: Why We Still Print Scans

The core standard for medical images, DICOM, was built in an era of film and CDs. In practice, many hospitals still treat X‑rays and scans like paper documents. For instance, some systems simply send images to a printer or produce a static file, rather than letting you scroll and zoom on screen. The standard does support web protocols now – for example, the DICOMweb extension uses HTTP/HTTPS to deliver images to web apps – but that part of the technology is often unused. As a result, patients often receive images in “filmless” ways that mimic old films: locked in a special viewer or on a media disk.
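For a sense of how lightweight DICOMweb actually is, the search half of the standard (QIDO-RS) is an ordinary HTTP query string. A minimal sketch, assuming a hypothetical server URL; `PatientID` and `StudyDate` are standard DICOM query keys:

```typescript
// Build a QIDO-RS study-search URL (the DICOMweb search transaction).
// The base URL is a hypothetical placeholder.
function buildQidoQuery(
  baseUrl: string,
  params: Record<string, string>
): string {
  const qs = new URLSearchParams(params).toString();
  return `${baseUrl}/studies?${qs}`;
}

const q = buildQidoQuery("https://pacs.example.org/dicomweb", {
  PatientID: "12345",
  StudyDate: "20250101-20251231", // a date range, per the standard
});
// The server answers with JSON (application/dicom+json) that any
// browser can parse - no CD, no proprietary viewer.
```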

This means your doctor might print out your CT/MRI or burn it to a CD/DVD instead of pushing it to an online viewer. It’s not because imaging machines can’t send digital data; they can. It’s because hospital IT systems are still partly designed around old habits, and many institutions have not yet rolled out patient-friendly web viewers. Until they do, patients end up with cumbersome CDs or PDFs.


Reference video: https://x.com/airinterface/status/1999982284267573456?s=20

Browsers and the GPU: Bringing 3D Visualization to You

Today’s web browsers can do amazing graphics. The new WebGPU API – shipping in Chrome and Edge, with Firefox and Safari support rolling out – lets a webpage tap into your computer’s graphics card to render 3D imagery very fast. This means complex volume renders – for example, a 3D model of your brain or heart built from your CT scan – can be done in your browser in real time. One recent article notes that WebGPU’s advanced texture support “is particularly valuable in fields such as … medical imaging”.

Image: Example of a 3D brain scan rendered interactively in a web browser using modern graphics APIs.
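WebGPU itself only runs inside a browser, but the bookkeeping behind a volume-rendering compute pass is easy to show: the volume is split into fixed-size workgroups, and the page dispatches ceil(dimension / workgroup size) groups per axis. A minimal sketch; the 4×4×4 workgroup size is an arbitrary assumption, not a requirement of the API:

```typescript
// How many workgroups to dispatch per axis for a compute pass over a
// CT/MRI volume, given a fixed workgroup edge length (assumed 4 here).
function dispatchSize(volumeDims: number[], workgroup = 4): number[] {
  return volumeDims.map((d) => Math.ceil(d / workgroup));
}

// e.g. a 512x512x300 CT volume with 4x4x4 workgroups:
const groups = dispatchSize([512, 512, 300]);
// In real WebGPU code, pass.dispatchWorkgroups(groups[0], groups[1], groups[2])
// would follow inside a compute pass.
```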

Imagine being able to rotate your own MRI on screen with a mouse, at full 3D depth, without waiting for a specialist. That’s now possible. Still, most patient portals only show static JPEGs or simple videos, not these rich interactive views. Once engineers build it, you could see slices of an MRI, adjust the contrast, or view a 3D reconstruction – all in any normal web browser without installing software. Browser-based viewers like OHIF already do this for doctors; patients just need access.
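“Adjusting the contrast” on a CT viewer usually means window/level mapping: raw intensities (Hounsfield units) are linearly mapped through a window defined by a center and a width onto 0–255 grayscale. A minimal sketch of that mapping; the soft-tissue window values in the usage example are illustrative, not prescriptive:

```typescript
// Window/level: map a raw intensity (e.g. Hounsfield units) to 0-255
// grayscale given a window center and width - the standard contrast
// control in radiology viewers.
function windowLevel(value: number, center: number, width: number): number {
  const lo = center - width / 2;
  const hi = center + width / 2;
  if (value <= lo) return 0;   // below the window: black
  if (value >= hi) return 255; // above the window: white
  return Math.round(((value - lo) / (hi - lo)) * 255);
}

// e.g. a typical "soft tissue" window (center 40, width 400):
windowLevel(-500, 40, 400); // well below the window -> 0
windowLevel(500, 40, 400);  // well above the window -> 255
```

Because this is a per-pixel linear map, it runs easily in real time in a browser, on the CPU or in a shader.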

Rust and WebAssembly: Fast Computation in the Browser


Beyond graphics, web computation has accelerated dramatically. Adobe’s work with WebAssembly proved that heavy desktop-grade workflows can run inside a browser. Modern systems languages like Rust, compiled to WASM, now let the browser execute complex image-processing tasks at near-native speed. This means the kinds of operations once limited to specialized radiology workstations — 3D segmentation, filtering, annotation — can now run directly on a patient’s device.
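As a stand-in for the Rust-to-WASM code described above (written in plain TypeScript here for readability, and not taken from any real viewer), the per-pixel logic of a simple operation like threshold segmentation looks like this:

```typescript
// Threshold segmentation on a flattened image slice: mark each pixel
// above `cutoff` as 1 (foreground), else 0. In a real viewer this
// loop would live in a Rust/WASM module or a GPU shader.
function thresholdSegment(pixels: Float32Array, cutoff: number): Uint8Array {
  const mask = new Uint8Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    mask[i] = pixels[i] > cutoff ? 1 : 0;
  }
  return mask;
}

// e.g. roughly segmenting bone (above ~300 HU) from a tiny sample of
// CT values: air, soft tissue, bone, dense bone.
const mask = thresholdSegment(new Float32Array([-1000, 40, 350, 1200]), 300);
// mask is [0, 0, 1, 1]
```

The same tight loop, compiled from Rust to WASM, runs at near-native speed over full-resolution volumes.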

In practice, this is starting to happen: a CT or MRI can load in seconds on a five-year-old laptop, with AI inference and GPU-powered rendering happening fully on the client side. All the heavy lifting – reconstruction, smoothing, measuring, reformatting – happens locally, avoiding round trips to slow back-end servers.