MedicalAI Observability Library API Reference
Table of Contents
API Overview
Image Format Support
Supported Image Formats by Modality
Universal Image Input Handlers
Modality-Specific Handlers
Model Prediction Format Support
Supported Model Output Formats by Task Type
Universal Model Output Handler
Task-Specific Output Handlers
Modality-Specific Metric Functions
CT-Specific Metrics
MRI-Specific Metrics
Ultrasound-Specific Metrics
X-ray-Specific Metrics
Mammography-Specific Metrics
Digital Pathology-Specific Metrics
PET/SPECT-Specific Metrics
OCT-Specific Metrics
Data Distribution Metrics
Privacy-Preserving Distribution Tracking
Model Performance Metrics
Classification Performance Metrics
Detection Performance Metrics
Segmentation Performance Metrics
Uncertainty Estimation
Cross-Task Consistency
Clinical Value Metrics
Privacy-Safe API Integration
Integration Approaches for Development and Production
Development Environment Integration
Production Environment Integration
Privacy and Security
Privacy-Preserving Metrics Collection
Secure Storage
Access Control
Configuration Reference
YAML Configuration Format
Environment Variables
Best Practices Guide
Integration Best Practices
Modality-Specific Recommendations
Task-Specific Recommendations
Version History and Roadmap
Version History
Roadmap
API Overview
This document provides a comprehensive reference for the MedicalAI Observability Library, detailing all classes, methods, parameters, return values, and supported data formats for medical imaging AI observability across development and production environments.
This reference is specifically designed to be implementable within a third-party workflow:
AI teams integrate their inference API into the third-party platform
The observability module is integrated alongside that inference API
The platform provides observability dashboards to AI teams and companies
Image Format Support
Supported Image Formats by Modality
The library supports all major medical imaging formats across modalities. Below is a detailed reference of supported formats by modality and the specific conversion methods available.
CT
Formats: DICOM, NIFTI, RAW, MHD
Key metadata: kVp, mAs, CTDI, reconstruction kernel, slice thickness
Handlers: handle_ct_input(), ct_to_numpy(), extract_ct_metadata()
MRI
Formats: DICOM, NIFTI, ANALYZE
Key metadata: TR, TE, field strength, sequence name, flip angle
Handlers: handle_mri_input(), mri_to_numpy(), extract_mri_metadata()
Ultrasound
Formats: DICOM, RAW, MP4, NRRD
Key metadata: transducer frequency, probe type, depth, gain
Handlers: handle_us_input(), us_to_numpy(), extract_us_metadata()
X-Ray
Formats: DICOM, JPEG, PNG
Key metadata: exposure index, detector type, grid ratio, SID
Handlers: handle_xray_input(), xray_to_numpy(), extract_xray_metadata()
Mammography
Formats: DICOM, JPEG
Key metadata: compression, view type, detector exposure, breast thickness
Handlers: handle_mammo_input(), mammo_to_numpy(), extract_mammo_metadata()
Digital Pathology
Formats: TIFF, SVS, NDPI
Key metadata: magnification, stain type, slide ID, tissue type
Handlers: handle_pathology_input(), pathology_to_numpy(), extract_pathology_metadata()
PET/SPECT
Formats: DICOM, NIFTI
Key metadata: radiopharmaceutical, activity, half-life, acquisition type
Handlers: handle_pet_input(), pet_to_numpy(), extract_pet_metadata()
OCT
Formats: DICOM, Proprietary
Key metadata: signal strength, axial resolution, scan pattern
Handlers: handle_oct_input(), oct_to_numpy(), extract_oct_metadata()
Multi-modal
Formats: Any combination of the above
Key metadata: Combined fields, registration parameters
Handlers: handle_multimodal_input(), multimodal_to_numpy(), extract_multimodal_metadata()
Universal Image Input Handlers
The following core handlers process any imaging data regardless of modality:
Parameters:
image_input (Any): Input image in any supported format
modality (str, optional): One of "CT", "MRI", "US", "XR", "MG", "PT", "OCT", "PATH", "MULTI"
Returns: Tuple[np.ndarray, Dict] containing pixel data and metadata
Exceptions:
UnsupportedFormatError: If image format is not recognized
ProcessingError: If image processing fails
Supported Input Formats:
NumPy Arrays:
2D arrays: Single-slice grayscale images (shape: H×W)
3D arrays: Volumetric data or RGB images (shapes: D×H×W or H×W×C)
4D arrays: Multi-channel volumes or temporal sequences (shapes: D×H×W×C or T×H×W×C)
Supported dtypes: uint8, uint16, int16, float32, float64
File Formats:
DICOM (.dcm): Raw or as pydicom.FileDataset
NIFTI (.nii, .nii.gz): Raw or as nibabel.Nifti1Image
TIFF (.tif, .tiff): Standard or BigTIFF
Slide formats: (.svs, .ndpi, .scn)
Common formats: (.png, .jpg, .jpeg)
Raw data: (.raw, .mhd/.mha pairs)
Videos: (.mp4, .avi) for ultrasound or fluoroscopy
Python Objects:
PyTorch tensors: torch.Tensor with NCHW or NDHWC format
TensorFlow tensors: tf.Tensor with NHWC or NDHWC format
SimpleITK Images: sitk.Image
PIL Images: PIL.Image.Image
Bytes/BytesIO: Raw binary data
File paths: String paths to image files
File-like objects: Objects with read() method
Structured Data:
Dictionaries: {"pixel_array": array, "metadata": dict}
Lists of images: [array1, array2, ...] for batched processing
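The structured-dictionary and batched-list input forms are plain NumPy objects; a minimal sketch of assembling both (the array shapes, dtype, and metadata keys here are illustrative, not required values):

```python
import numpy as np

# Structured-dictionary input: {"pixel_array": ..., "metadata": ...}
# (metadata keys shown are illustrative DICOM-style fields).
pixel_array = np.zeros((512, 512), dtype=np.uint16)  # single-slice H x W image
structured_input = {
    "pixel_array": pixel_array,
    "metadata": {"Modality": "CT", "SliceThickness": 1.0},
}

# Batched input: a plain list of arrays, one entry per image.
batch = [np.zeros((512, 512), dtype=np.uint16) for _ in range(4)]

print(structured_input["pixel_array"].shape, len(batch))
```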
Modality-Specific Handlers
Each modality has specialized handler functions:
handle_ct_input()
Parameters:
ct_input (Any): CT scan in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized CT data and metadata
Supported CT-specific formats:
DICOM series with modality tag "CT"
NIFTI volumes with calibrated Hounsfield Units
Raw volumes with rescale parameters
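For raw volumes carrying rescale parameters, conversion to Hounsfield Units follows the standard DICOM linear transform HU = stored_value × slope + intercept; a minimal sketch (the slope and intercept values below are typical but illustrative):

```python
import numpy as np

def raw_to_hounsfield(raw: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    """Apply the standard DICOM linear rescale: HU = raw * slope + intercept."""
    return raw.astype(np.float32) * slope + intercept

# Typical CT rescale: slope 1.0, intercept -1024 (values illustrative).
raw = np.array([[0, 1024], [2048, 3072]], dtype=np.uint16)
hu = raw_to_hounsfield(raw, slope=1.0, intercept=-1024.0)
print(hu)  # stored value 0 maps to -1024 HU (air), 1024 maps to 0 HU (water)
```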
handle_mri_input()
Parameters:
mri_input (Any): MRI scan in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized MRI data and metadata
Supported MRI-specific formats:
DICOM series with modality tag "MR"
NIFTI volumes with MRI-specific metadata
DICOM multi-echo or multi-contrast sequences
handle_us_input()
Parameters:
us_input (Any): Ultrasound image or video in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized ultrasound data and metadata
Supported ultrasound-specific formats:
DICOM with modality tag "US"
Video formats (.mp4, .avi) for dynamic ultrasound
Color or grayscale Doppler images
handle_xray_input()
Parameters:
xray_input (Any): X-ray image in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized X-ray data and metadata
Supported X-ray-specific formats:
DICOM with modality tag "DX" or "CR"
Standard image formats with appropriate metadata
Exposure-corrected or raw detector readings
handle_mammo_input()
Parameters:
mammo_input (Any): Mammography image in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized mammography data and metadata
Supported mammography-specific formats:
DICOM with modality tag "MG"
Standard image formats with appropriate metadata
Tomosynthesis image sets
handle_pathology_input()
Parameters:
pathology_input (Any): Pathology image in any supported format
magnification (float, optional): Magnification level for multi-resolution images
Returns: Tuple[np.ndarray, Dict] containing standardized pathology data and metadata
Supported pathology-specific formats:
Whole slide images (.svs, .ndpi, .tiff)
Pyramidal TIFF formats
Multi-resolution tile-based formats
handle_pet_input()
Parameters:
pet_input (Any): PET or SPECT image in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized PET/SPECT data and metadata
Supported PET/SPECT-specific formats:
DICOM with modality tag "PT" or "NM"
NIFTI with appropriate quantitative units
SUV-calibrated or raw counts
handle_oct_input()
Parameters:
oct_input (Any): OCT image in any supported format
Returns: Tuple[np.ndarray, Dict] containing standardized OCT data and metadata
Supported OCT-specific formats:
DICOM with appropriate OCT-specific tags
Vendor-specific proprietary formats
B-scan or volume scans
Model Prediction Format Support
Supported Model Output Formats by Task Type
The library supports all common model output formats across different medical AI tasks. Each format is handled by specialized functions that extract relevant metrics.
Classification
Output formats: Probabilities, class indices, medical scores (PI-RADS, BI-RADS, etc.)
Extracted metrics: Confidence distribution, uncertainty, calibration
Handlers: handle_classification_output(), analyze_classification_confidence()
Detection
Output formats: Bounding boxes, heatmaps, coordinates (multiple formats)
Extracted metrics: Size distribution, location patterns, confidence thresholds
Handlers: handle_detection_output(), analyze_detection_characteristics()
Segmentation
Output formats: Binary masks, multi-class masks, probability maps, RLE
Extracted metrics: Volume statistics, boundary smoothness, region properties
Handlers: handle_segmentation_output(), analyze_segmentation_characteristics()
Regression
Output formats: Scalar values, measurement arrays, named measurements
Extracted metrics: Distribution, bias patterns, correlation with inputs
Handlers: handle_regression_output(), analyze_regression_characteristics()
Enhancement
Output formats: Enhanced images, paired before/after, quality metrics
Extracted metrics: Quality improvement, artifact reduction, detail preservation
Handlers: handle_enhancement_output(), analyze_enhancement_characteristics()
Multi-task
Output formats: Combined outputs from multiple tasks
Extracted metrics: Inter-task relationships, consistency across tasks
Handlers: handle_multitask_output(), analyze_multitask_consistency()
Universal Model Output Handler
Parameters:
output_data (Any): Model output in any supported format
prediction_type (str, optional): One of "classification", "detection", "segmentation", "regression", "enhancement", "multitask"
Returns: Dict containing standardized prediction representation
Exceptions:
UnsupportedFormatError: If output format is not recognized
ValidationError: If output data is invalid
Supported Output Formats:
Raw Numerical Formats:
NumPy arrays of various shapes depending on task
Python lists, tuples, or scalar values
PyTorch and TensorFlow tensors
Framework-Specific Formats:
PyTorch detection format:
{"boxes": tensor, "scores": tensor, "labels": tensor}
TensorFlow detection format:
{"detection_boxes": tensor, "detection_scores": tensor, "detection_classes": tensor}
COCO format:
[{"bbox": [x, y, w, h], "category_id": id, "score": score}, ...]
YOLO format:
[x_center, y_center, width, height, class_conf, class_pred]
Medical Domain Formats:
PI-RADS scores:
{"system": "PI-RADS", "score": 4, "confidence": 0.92}
BI-RADS assessment:
{"system": "BI-RADS", "category": "4A", "probability": 0.15}
TNM staging:
{"system": "TNM", "t": "T2", "n": "N0", "m": "M0", "confidence": 0.85}
Structured measurements:
{"measurements": [{"name": "volume", "value": 45.7, "unit": "cm³"}]}
Task-Specific Structured Formats:
See detailed formats in the task-specific handlers below
Task-Specific Output Handlers
Classification Output Handlers
handle_classification_output()
Parameters:
classification_output (Any): Classification output in any supported format
Returns: Dict containing standardized classification output
Supported Classification Formats:
Raw Probabilities:
NumPy array: np.array([0.05, 0.85, 0.10]) (class probabilities)
List/tuple: [0.05, 0.85, 0.10] (class probabilities)
PyTorch tensor: torch.tensor([[0.05, 0.85, 0.10]])
TensorFlow tensor: tf.constant([[0.05, 0.85, 0.10]])
Class Index Format:
Single integer: 1 (predicted class index)
With confidence: (1, 0.85) (class index, confidence)
One-hot encoded: [0, 1, 0]
Named Class Format:
Dictionary:
{"predicted_class": "pneumonia", "confidence": 0.85}
Named probabilities:
{"normal": 0.05, "pneumonia": 0.85, "covid": 0.10}
Multi-Label Format:
Dictionary:
{"mass": 0.85, "pneumothorax": 0.12, "effusion": 0.65}
Binary indicators:
{"mass": 1, "pneumothorax": 0, "effusion": 1}
Medical Scoring Systems:
PI-RADS:
{"system": "PI-RADS", "score": 4, "confidence": 0.92}
BI-RADS:
{"system": "BI-RADS", "category": "4A", "probability": 0.15}
ACR TI-RADS:
{"system": "TI-RADS", "score": 3, "confidence": 0.78}
Lung-RADS:
{"system": "Lung-RADS", "category": "3", "malignancy_probability": 0.15}
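A simplified sketch of the kind of normalization a classification handler performs, mapping two of the accepted formats onto one common shape (this is an illustration of the idea, not the library's actual implementation, and the output keys are hypothetical):

```python
import numpy as np

def normalize_classification(output):
    """Map a raw probability vector or a named-probability dict onto one
    common shape. Illustrative only; the real handler covers many more cases."""
    if isinstance(output, dict):
        # Named probabilities, e.g. {"normal": 0.05, "pneumonia": 0.85, ...}
        names, probs = zip(*output.items())
        probs = np.asarray(probs, dtype=np.float64)
        idx = int(np.argmax(probs))
        return {"probabilities": probs, "predicted_index": idx,
                "confidence": float(probs[idx]), "class_names": list(names)}
    # Raw probability vector (list, tuple, or array).
    probs = np.asarray(output, dtype=np.float64).ravel()
    idx = int(np.argmax(probs))
    return {"probabilities": probs, "predicted_index": idx,
            "confidence": float(probs[idx])}

print(normalize_classification([0.05, 0.85, 0.10])["predicted_index"])  # 1
print(normalize_classification({"normal": 0.05, "pneumonia": 0.85, "covid": 0.10}))
```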
Detection Output Handlers
handle_detection_output()
Parameters:
detection_output (Any): Detection output in any supported format
Returns: Dict containing standardized detection output
Supported Detection Formats:
Bounding Box Formats:
List of [x1, y1, x2, y2, conf, class]:
[[100, 150, 250, 300, 0.9, 1], ...]
List of [x, y, w, h, conf, class]:
[[100, 150, 150, 150, 0.9, 1], ...]
List of [x_center, y_center, w, h, conf, class]:
[[175, 225, 150, 150, 0.9, 1], ...]
PyTorch Format:
Dictionary:
{"boxes": torch.tensor([[100, 150, 250, 300]]), "scores": torch.tensor([0.9]), "labels": torch.tensor([1])}
TensorFlow Format:
Dictionary:
{"detection_boxes": tf.constant([[0.1, 0.2, 0.3, 0.4]]), "detection_scores": tf.constant([0.9]), "detection_classes": tf.constant([1])}
COCO Format:
List of dictionaries:
[{"bbox": [100, 150, 150, 150], "category_id": 1, "score": 0.9}, ...]
Medical Detection Format:
Dictionary:
{"detections": [{"bbox": [100, 150, 250, 300], "class": "lesion", "confidence": 0.9, "attributes": {"malignancy": 0.7, "location": "peripheral_zone", "pi_rads_score": 4}}]}
Localization Formats:
Keypoints: np.array([[120, 230, 0.95], [250, 180, 0.80]]) (x, y, confidence)
Heatmap: np.array(shape=(512, 512), dtype=float32) (probability map)
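The first three bounding-box layouts describe the same region in different conventions; converting between them is a pair of linear transforms. A minimal sketch using the example boxes from this section:

```python
def xywh_to_xyxy(box):
    """[x, y, w, h] -> [x1, y1, x2, y2]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

def cxcywh_to_xyxy(box):
    """[x_center, y_center, w, h] -> [x1, y1, x2, y2]."""
    cx, cy, w, h = box
    return [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]

# The three example boxes above all describe the same region:
assert xywh_to_xyxy([100, 150, 150, 150]) == [100, 150, 250, 300]
assert cxcywh_to_xyxy([175, 225, 150, 150]) == [100.0, 150.0, 250.0, 300.0]
```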
Segmentation Output Handlers
handle_segmentation_output()
Parameters:
segmentation_output (Any): Segmentation output in any supported format
Returns: Dict containing standardized segmentation output
Supported Segmentation Formats:
Mask Formats:
Binary mask: np.array(shape=(512, 512), dtype=bool) (foreground/background)
Multi-class mask: np.array(shape=(512, 512), dtype=np.uint8) (class indices)
One-hot encoded: np.array(shape=(num_classes, H, W), dtype=bool) or np.array(shape=(H, W, num_classes), dtype=bool)
Probability maps: np.array(shape=(num_classes, H, W), dtype=np.float32) or np.array(shape=(H, W, num_classes), dtype=np.float32)
Instance Segmentation:
List of masks: [mask1, mask2, mask3] (one mask per instance)
Instance IDs: np.array(shape=(H, W), dtype=np.int32) (pixel values = instance IDs)
Named Masks:
Dictionary:
{"tumor": tumor_mask, "organ": organ_mask}
Compressed Formats:
RLE:
{"counts": [49, 2, 18, 14...], "size": [512, 512]}
Boundary Representations:
Contours:
[np.array([[100, 100], [120, 100], [120, 120]...]), ...]
Surface mesh:
{"vertices": vertices_array, "faces": faces_array}
Medical Segmentation Format:
Dictionary:
{"segmentation": {"mask": mask_array, "classes": ["background", "prostate", "lesion"], "confidence_map": conf_array, "volumetric_statistics": {"total_volume_mm3": 45678.9, "class_volumes": {"prostate": 45000.0, "lesion": 678.9}}}}
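As an illustration of the compressed RLE format, run-length counts can be expanded back into a binary mask. The sketch below uses row-major order for simplicity; note that COCO-style RLE is column-major:

```python
import numpy as np

def rle_decode(counts, size):
    """Decode an uncompressed RLE {"counts": [...], "size": [H, W]} into a
    binary mask. Counts alternate (background, foreground) runs.
    Row-major for illustration; COCO RLE uses column-major order."""
    flat = np.zeros(size[0] * size[1], dtype=bool)
    pos, value = 0, False
    for run in counts:
        if value:
            flat[pos:pos + run] = True
        pos += run
        value = not value
    return flat.reshape(size)

mask = rle_decode([3, 2, 5, 2, 4], [4, 4])
print(mask.astype(int))
```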
Regression/Measurement Output Handlers
handle_regression_output()
Parameters:
regression_output (Any): Regression output in any supported format
Returns: Dict containing standardized regression output
Supported Regression Formats:
Scalar Formats:
Single value:
0.75
Multiple values:
[0.75, 12.3, 45.6]
Named Measurements:
Dictionary:
{"ejection_fraction": 0.65, "stroke_volume": 75.2}
With confidence:
{"value": 0.75, "confidence": 0.92, "range": [0.70, 0.80]}
Structured Measurements:
List of measurements:
{"measurements": [{"name": "tumor_size", "value": 15.2, "unit": "mm", "confidence": 0.9}]}
Medical Measurements:
Cardiac measurements:
{"measurements": [{"name": "ejection_fraction", "value": 0.65, "unit": "%", "reference_range": [0.55, 0.70]}]}
Tumor measurements:
{"measurements": [{"name": "long_axis", "value": 24.3, "unit": "mm"}, {"name": "short_axis", "value": 16.7, "unit": "mm"}]}
Organ volumes:
{"measurements": [{"name": "liver_volume", "value": 1520, "unit": "cm³", "reference_range": [1200, 1600]}]}
Time Series:
Waveform:
np.array(shape=(128), dtype=np.float32)
(e.g., for cardiac cycle analysis)
Enhancement/Reconstruction Output Handlers
handle_enhancement_output()
Parameters:
enhancement_output (Any): Enhancement output in any supported format
Returns: Dict containing standardized enhancement output
Supported Enhancement Formats:
Enhanced Image:
NumPy array:
np.array(shape=(512, 512), dtype=np.float32)
Before/After Pair:
Dictionary:
{"original": original_array, "enhanced": enhanced_array}
Multiple Reconstructions:
Dictionary:
{"standard": standard_array, "bone": bone_array, "soft_tissue": soft_tissue_array}
With Quality Metrics:
Dictionary:
{"image": enhanced_array, "metrics": {"psnr": 32.6, "ssim": 0.92}}
Medical Enhancement Formats:
Denoised MRI:
{"original": noisy_mri, "enhanced": denoised_mri, "metrics": {"snr_improvement": 6.8, "detail_preservation": 0.94}}
Super-resolution CT:
{"original": low_res_ct, "enhanced": high_res_ct, "metrics": {"resolution_scale": 3, "structure_preservation": 0.91}}
Artifact-corrected ultrasound:
{"original": artifact_us, "enhanced": corrected_us, "artifacts_removed": ["shadowing", "reverberation"]}
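Quality metrics such as PSNR for a before/after pair can be computed directly from the two arrays; a minimal sketch (the library may use a different PSNR variant or data range convention):

```python
import numpy as np

def psnr(original: np.ndarray, enhanced: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between a before/after image pair."""
    mse = np.mean((original.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10 * np.log10(data_range ** 2 / mse))

rng = np.random.default_rng(0)
original = rng.uniform(0, 1, (64, 64))
enhanced = original + rng.normal(0, 0.01, (64, 64))  # small synthetic residual
value = psnr(original, enhanced, data_range=1.0)
print(round(value, 1))  # roughly 40 dB for residual sigma = 0.01
```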
Modality-Specific Metric Functions
The library provides specialized metric functions for each imaging modality. These functions are designed to extract and analyze metrics that are uniquely relevant to each modality.
CT-Specific Metrics
Compute comprehensive metrics for CT images.
Parameters:
image (np.ndarray): CT image data
metadata (Dict): CT metadata
Returns: Dict containing CT-specific metrics
Metrics computed:
HU statistics (mean, std, min, max, percentiles)
Noise index
Contrast-to-noise ratio
Spatial resolution estimate
Slice thickness verification
Reconstruction kernel characteristics
Dose metrics (if available in metadata)
Metal artifact detection and quantification
Beam hardening artifact detection
Motion artifact detection
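As an example of one listed metric, contrast-to-noise ratio can be computed from an insert ROI and a background ROI. The sketch below uses one common convention (|ΔHU| over background standard deviation); it is not necessarily the library's exact formula:

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask, background_mask):
    """CNR = |mean(ROI) - mean(background)| / std(background).
    ROIs are boolean masks; CNR definitions vary across sites."""
    roi = image[roi_mask]
    bg = image[background_mask]
    return float(abs(roi.mean() - bg.mean()) / bg.std())

# Synthetic image: noisy background plus a high-contrast insert.
rng = np.random.default_rng(1)
image = rng.normal(0.0, 5.0, (64, 64))
image[20:30, 20:30] += 50.0
roi = np.zeros((64, 64), dtype=bool); roi[20:30, 20:30] = True
bg = np.zeros((64, 64), dtype=bool); bg[:10, :10] = True
cnr = contrast_to_noise_ratio(image, roi, bg)
print(round(cnr, 2))  # approximately 50 / 5 = 10 for this synthetic case
```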
Analyze Hounsfield Unit distribution in CT image.
Parameters:
ct_image (np.ndarray): CT image data in HU
roi (np.ndarray or tuple, optional): Region of interest mask or coordinates
Returns: Dict containing HU distribution metrics
Detect common artifacts in CT images.
Parameters:
ct_image (np.ndarray): CT image data
metadata (Dict): CT acquisition metadata
Returns: Dict containing detected artifacts and their severity
Compute noise characteristics in CT images.
Parameters:
ct_image (np.ndarray): CT image data
background_roi (np.ndarray or tuple, optional): Background region for noise analysis
Returns: Dict containing noise metrics
MRI-Specific Metrics
Compute comprehensive metrics for MRI images.
Parameters:
image (np.ndarray): MRI image data
metadata (Dict): MRI metadata
Returns: Dict containing MRI-specific metrics
Metrics computed:
SNR (signal-to-noise ratio)
CNR (contrast-to-noise ratio)
Ghosting ratio
Image uniformity
Resolution and sharpness
Sequence-specific quality metrics
B0 field homogeneity estimation
Motion artifact detection
Specific MRI artifacts (aliasing, chemical shift, etc.)
Signal intensity characteristics
Compute signal-to-noise ratio in MRI images.
Parameters:
mri_image (np.ndarray): MRI image data
signal_roi (np.ndarray or tuple, optional): Signal region
noise_roi (np.ndarray or tuple, optional): Noise region
Returns: float representing SNR
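A common single-image SNR convention (mean of a signal ROI over the standard deviation of a noise ROI) can be sketched as follows; magnitude MRI often warrants a Rayleigh correction for background noise, which is omitted here:

```python
import numpy as np

def compute_snr(image, signal_mask, noise_mask):
    """SNR = mean(signal ROI) / std(noise ROI), a common single-image
    convention. Boolean masks select the two regions."""
    return float(image[signal_mask].mean() / image[noise_mask].std())

# Synthetic image: noise floor sigma = 2, signal region near 100.
rng = np.random.default_rng(2)
image = rng.normal(0.0, 2.0, (64, 64))
image[10:20, 10:20] += 100.0
signal = np.zeros((64, 64), dtype=bool); signal[10:20, 10:20] = True
noise = np.zeros((64, 64), dtype=bool); noise[:10, :10] = True
snr = compute_snr(image, signal, noise)
print(round(snr, 1))  # approximately 100 / 2 = 50 for this synthetic case
```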
Analyze MRI quality based on sequence type.
Parameters:
mri_image (np.ndarray): MRI image data
metadata (Dict): MRI acquisition metadata
sequence_type (str, optional): One of "T1", "T2", "FLAIR", "DWI", "ADC", etc.
Returns: Dict containing sequence-specific quality metrics
Detect common artifacts in MRI images.
Parameters:
mri_image (np.ndarray): MRI image data
metadata (Dict): MRI acquisition metadata
Returns: Dict containing detected artifacts and their severity
Ultrasound-Specific Metrics
Compute comprehensive metrics for ultrasound images.
Parameters:
image (np.ndarray): Ultrasound image data
metadata (Dict): Ultrasound metadata
Returns: Dict containing ultrasound-specific metrics
Metrics computed:
Signal-to-noise ratio
Contrast ratio
Penetration depth
Resolution (axial, lateral)
Speckle characteristics
Cyst detection and analysis
Doppler quality metrics (if applicable)
Specific artifacts (shadowing, reverberation, enhancement)
Gain and dynamic range appropriateness
Compute ultrasound penetration depth.
Parameters:
us_image (np.ndarray): Ultrasound image data
metadata (Dict): Ultrasound acquisition metadata
Returns: Dict containing penetration metrics
Analyze speckle characteristics in ultrasound image.
Parameters:
us_image (np.ndarray): Ultrasound image data
roi (np.ndarray or tuple, optional): Region for speckle analysis
Returns: Dict containing speckle metrics
Detect common artifacts in ultrasound images.
Parameters:
us_image
(np.ndarray): Ultrasound image data
Returns: Dict containing detected artifacts and their characteristics
Analyze quality of Doppler ultrasound images.
Parameters:
doppler_image (np.ndarray): Doppler ultrasound image
metadata (Dict): Doppler acquisition metadata
Returns: Dict containing Doppler quality metrics
X-ray-Specific Metrics
Compute comprehensive metrics for X-ray images.
Parameters:
image (np.ndarray): X-ray image data
metadata (Dict): X-ray metadata
Returns: Dict containing X-ray-specific metrics
Metrics computed:
Exposure index
Deviation index
Signal-to-noise ratio
Contrast
Dynamic range
Resolution and sharpness
Histogram analysis
Collimation quality
Patient positioning assessment
Specific artifacts (grid lines, foreign objects)
Compute exposure-related metrics for X-ray images.
Parameters:
xray_image (np.ndarray): X-ray image data
metadata (Dict): X-ray acquisition metadata
Returns: Dict containing exposure metrics
Analyze patient positioning in X-ray images.
Parameters:
xray_image (np.ndarray): X-ray image data
exam_type (str, optional): Type of examination (e.g., "chest", "abdomen", "extremity")
Returns: Dict containing positioning quality metrics
Detect common artifacts in X-ray images.
Parameters:
xray_image (np.ndarray): X-ray image data
metadata (Dict): X-ray acquisition metadata
Returns: Dict containing detected artifacts and their severity
Mammography-Specific Metrics
Compute comprehensive metrics for mammography images.
Parameters:
image (np.ndarray): Mammography image data
metadata (Dict): Mammography metadata
Returns: Dict containing mammography-specific metrics
Metrics computed:
Exposure index
Contrast
Signal-to-noise ratio
Breast tissue coverage
Compression thickness
Positioning quality
Pectoral muscle visualization
Skin line visualization
Technical artifacts
MQSA compliance metrics
Analyze breast density in mammography images.
Parameters:
mammo_image (np.ndarray): Mammography image data
metadata (Dict): Mammography acquisition metadata
Returns: Dict containing breast density metrics
Evaluate positioning quality in mammography.
Parameters:
mammo_image (np.ndarray): Mammography image data
view (str, optional): Mammographic view (e.g., "MLO", "CC", "ML", "LM")
Returns: Dict containing positioning quality metrics
Detect common artifacts in mammography images.
Parameters:
mammo_image (np.ndarray): Mammography image data
metadata (Dict): Mammography acquisition metadata
Returns: Dict containing detected artifacts and their characteristics
Digital Pathology-Specific Metrics
Compute comprehensive metrics for digital pathology images.
Parameters:
image (np.ndarray): Pathology image data
metadata (Dict): Pathology metadata
stain_type (str, optional): Stain type (e.g., "H&E", "IHC", "special")
Returns: Dict containing pathology-specific metrics
Metrics computed:
Focus quality
Color consistency
Stain quality and normalization
Tissue coverage
Scanning artifacts
Tissue fold detection
Air bubble detection
Pen marks detection
Color balance and white balance
Section thickness consistency
Analyze stain quality in pathology images.
Parameters:
path_image (np.ndarray): Pathology image data
stain_type (str): Stain type
Returns: Dict containing stain quality metrics
Analyze focus quality in pathology images.
Parameters:
path_image (np.ndarray): Pathology image data
tile_size (int, optional): Size of tiles for localized focus analysis
Returns: Dict containing focus quality metrics
Detect common artifacts in pathology images.
Parameters:
path_image
(np.ndarray): Pathology image data
Returns: Dict containing detected artifacts and their severity
PET/SPECT-Specific Metrics
Compute comprehensive metrics for PET/SPECT images.
Parameters:
image (np.ndarray): PET/SPECT image data
metadata (Dict): PET/SPECT metadata
Returns: Dict containing PET/SPECT-specific metrics
Metrics computed:
SUV calibration verification
Noise equivalent count rate
Uniformity
Reconstruction quality
Resolution
Motion artifacts
Attenuation correction quality
Registration quality (for hybrid imaging)
Specific artifacts (attenuation, scatter, randoms)
Quantitative accuracy metrics
Analyze SUV calibration in PET images.
Parameters:
pet_image (np.ndarray): PET image data
metadata (Dict): PET acquisition metadata
Returns: Dict containing SUV calibration metrics
Compute uniformity in PET background regions.
Parameters:
pet_image (np.ndarray): PET image data
background_roi (np.ndarray or tuple, optional): Background region
Returns: Dict containing uniformity metrics
Detect common artifacts in PET images.
Parameters:
pet_image (np.ndarray): PET image data
metadata (Dict): PET acquisition metadata
Returns: Dict containing detected artifacts and their characteristics
OCT-Specific Metrics
Compute comprehensive metrics for OCT images.
Parameters:
image (np.ndarray): OCT image data
metadata (Dict): OCT metadata
Returns: Dict containing OCT-specific metrics
Metrics computed:
Signal strength/quality
Signal-to-noise ratio
Axial resolution
Lateral resolution
Depth penetration
Motion artifacts
Segmentation quality estimation
Layer visibility
Specific artifacts (shadowing, mirror artifacts)
Signal attenuation characteristics
Analyze signal quality in OCT images.
Parameters:
oct_image
(np.ndarray): OCT image data
Returns: Dict containing signal quality metrics
Estimate resolution in OCT images.
Parameters:
oct_image (np.ndarray): OCT image data
metadata (Dict): OCT acquisition metadata
Returns: Dict containing resolution metrics
Detect common artifacts in OCT images.
Parameters:
oct_image
(np.ndarray): OCT image data
Returns: Dict containing detected artifacts and their severity
Data Distribution Metrics
Functions for tracking input data distributions and detecting shifts in deployment.
Track distribution of a specific metadata field across multiple images.
Parameters:
metadata_entries (List[Dict]): List of metadata dictionaries from multiple images
field_name (str): Name of metadata field to track
sketch_method (str, optional): Method for distribution tracking ("exact", "kll", "count_min", "none")
Returns: Dict containing distribution statistics
Example fields for tracking:
Scanner manufacturers/models
Protocol names
kVp settings (CT)
TE/TR values (MRI)
Field strengths (MRI)
Slice thickness
Reconstruction kernels
Detector exposure settings
Study descriptions
Patient positioning
Track distribution of scanner manufacturers and models.
Parameters:
metadata_entries (List[Dict]): List of metadata dictionaries
sketch_method (str, optional): Method for distribution tracking
Returns: Dict containing scanner distribution statistics
Track distribution of imaging protocols.
Parameters:
metadata_entries (List[Dict]): List of metadata dictionaries
sketch_method (str, optional): Method for distribution tracking
Returns: Dict containing protocol distribution statistics
Analyze distribution of image dimensions.
Parameters:
images (List[np.ndarray]): List of image arrays
metadata_entries (List[Dict], optional): List of metadata dictionaries for pixel spacing
Returns: Dict containing dimension statistics
Analyze distribution of pixel intensities.
Parameters:
images (List[np.ndarray]): List of image arrays
modality (str, optional): Imaging modality for scaling/windowing
sketch_method (str, optional): Method for distribution tracking
Returns: Dict containing intensity distribution statistics
Detect drift between current and baseline distributions.
Parameters:
current_distribution (Dict): Current distribution statistics
baseline_distribution (Dict): Baseline distribution statistics
field_name (str, optional): Field name for specific distribution
Returns: Dict containing drift metrics
Metrics computed:
Drift magnitude
Statistical significance
Maximum divergence
Distribution comparison statistics (KL divergence, Wasserstein distance, etc.)
Top contributors to drift
Visualization data
Compute distance between two distributions.
Parameters:
dist1 (Dict or np.ndarray): First distribution
dist2 (Dict or np.ndarray): Second distribution
method (str, optional): Distance method ("kl_divergence", "wasserstein", "js_divergence", "earth_movers")
Returns: float representing distance between distributions
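Of the listed methods, Jensen-Shannon divergence is symmetric and bounded, which makes it convenient for drift scoring; a minimal sketch over binned histograms (base-2 logarithm, so values fall in [0, 1]):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two histograms.
    Symmetric, bounded in [0, 1]; eps guards against log(0)."""
    p = np.asarray(p, dtype=np.float64); p = p / p.sum()
    q = np.asarray(q, dtype=np.float64); q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        return float(np.sum(a * np.log2((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([1, 2, 3], [1, 2, 3]))  # 0.0 for identical histograms
print(js_divergence([1, 0, 0], [0, 0, 1]))  # ~1.0 for disjoint support
```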
Generate visualization data for distribution drift.
Parameters:
current_distribution (Dict): Current distribution statistics
baseline_distribution (Dict): Baseline distribution statistics
field_name (str): Field name for specific distribution
Returns: Dict containing visualization data
Privacy-Preserving Distribution Tracking
The library provides privacy-preserving methods for tracking distributions without storing raw data.
KLL Sketch for Continuous Distributions
Parameters:
k (int): Size parameter controlling accuracy/memory tradeoff
epsilon (float): Error bound
is_float (bool): Whether values are floating point (True) or integer (False)
Methods:
update(value): Add a value to the sketch
merge(other_sketch): Merge with another sketch
get_quantile(q): Get quantile value (0 ≤ q ≤ 1)
get_quantiles(qs): Get multiple quantile values
get_rank(value): Get rank of a value (0 to 1)
get_min_value(): Get minimum value
get_max_value(): Get maximum value
get_num_items(): Get number of items
to_bytes(): Serialize sketch to bytes
from_bytes(data): Deserialize sketch from bytes (class method)
Example uses:
SNR distribution across studies
Confidence score distribution
Lesion size/volume distribution
Pixel intensity distribution
Processing time distribution
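To illustrate the sketch's API semantics (not its space-efficient internals), here is a naive exact-quantile stand-in exposing the same update/merge/get_quantile surface; the class name is hypothetical and it stores every value, which a real KLL sketch avoids:

```python
import bisect

class ExactQuantileSketch:
    """Naive stand-in mirroring part of the KLL sketch API documented above.
    Exact and O(n) memory, so it only illustrates the call semantics."""
    def __init__(self):
        self._values = []
    def update(self, value):
        bisect.insort(self._values, value)  # keep values sorted on insert
    def merge(self, other):
        for v in other._values:
            self.update(v)
    def get_quantile(self, q):
        idx = min(int(q * len(self._values)), len(self._values) - 1)
        return self._values[idx]
    def get_num_items(self):
        return len(self._values)

# Example use: tracking an SNR distribution across studies.
s = ExactQuantileSketch()
for snr in [12.0, 15.5, 9.8, 20.1, 14.2]:
    s.update(snr)
print(s.get_quantile(0.5), s.get_num_items())
```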
CountMinSketch for Categorical Distributions
Parameters:
width (int): Width of sketch (larger = more accurate)
depth (int): Depth of sketch (more hash functions = fewer collisions)
seed (int, optional): Random seed for hash functions
Methods:
update(item, count=1): Add an item to the sketch with specified count
merge(other_sketch): Merge with another sketch
estimate_count(item): Estimate item count
estimate_frequency(item): Estimate item frequency (0 to 1)
get_heavy_hitters(threshold): Get frequent items above threshold
get_total_count(): Get total count of all items
to_bytes(): Serialize sketch to bytes
from_bytes(data): Deserialize sketch from bytes (class method)
Example uses:
Scanner manufacturer distribution
Protocol name distribution
Artifact type distribution
Error type distribution
Diagnosis code distribution
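The count-min sketch is a classic structure; a minimal illustrative implementation of the update/estimate_count surface documented above (class name, sizing, and hash choice are arbitrary, and estimates can only overcount, never undercount):

```python
import hashlib

class TinyCountMin:
    """Minimal count-min sketch mirroring the documented update/estimate_count
    surface. Illustrative only; real implementations use faster hashes."""
    def __init__(self, width=256, depth=4, seed=0):
        self.width, self.depth, self.seed = width, depth, seed
        self.table = [[0] * width for _ in range(depth)]
    def _hash(self, item, row):
        digest = hashlib.sha256(f"{self.seed}:{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width
    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count
    def estimate_count(self, item):
        # Taking the minimum over rows bounds the overcount from collisions.
        return min(self.table[row][self._hash(item, row)] for row in range(self.depth))

# Example use: scanner manufacturer distribution without storing raw records.
cms = TinyCountMin()
for vendor in ["SIEMENS"] * 5 + ["GE"] * 3:
    cms.update(vendor)
print(cms.estimate_count("SIEMENS"), cms.estimate_count("GE"))
```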
HyperLogLog for Cardinality Estimation
Parameters:
precision (int): Precision parameter (4-16, higher = more accurate)
Methods:
update(item): Add an item to the sketch
merge(other_sketch): Merge with another sketch
get_cardinality(): Estimate unique item count
to_bytes(): Serialize sketch to bytes
from_bytes(data): Deserialize sketch from bytes (class method)
Example uses:
Count of unique study descriptions
Count of unique protocols
Count of unique scanners
Count of unique error messages
Count of unique patients (with appropriate anonymization)
Privacy-Safe API Integration
Parameters:
api_key (str, optional): API key
endpoint (str, optional): API endpoint
Returns: Dict containing API response
Exceptions:
APIError: If API call fails
Model Performance Metrics
Functions for analyzing model performance without ground truth labels.
Classification Performance Metrics
Analyze confidence distribution and uncertainty for classification outputs.
Parameters:
classification_outputs (List[Dict]): List of standardized classification outputs
threshold (float, optional): Classification threshold
Returns: Dict containing confidence metrics
Metrics computed:
Confidence distribution statistics (mean, median, std, min, max)
Entropy of predictions
Calibration metrics
Uncertainty estimates
Prediction stability
Threshold analysis
Class distribution analysis
Analyze calibration of classification probabilities.
Parameters:
probabilities (List[np.ndarray]): List of probability distributions
bins (int, optional): Number of bins for reliability diagram
Returns: Dict containing calibration metrics
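When a small labeled audit set is available, expected calibration error (ECE) summarizes the reliability diagram in one number; a minimal sketch using the usual equal-width bins (this needs labels, so in label-free monitoring it is typically computed on an audited subset):

```python
import numpy as np

def expected_calibration_error(confidences, correct, bins=10):
    """ECE: bin predictions by confidence, then average |accuracy - confidence|
    across bins, weighted by bin occupancy."""
    confidences = np.asarray(confidences, dtype=np.float64)
    correct = np.asarray(correct, dtype=np.float64)
    edges = np.linspace(0.0, 1.0, bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)

# Perfectly calibrated toy case: 0.75-confidence predictions, 75% correct.
conf = [0.75] * 4
corr = [1, 1, 1, 0]
print(expected_calibration_error(conf, corr))  # 0.0
```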
Detect anomalous predictions based on confidence patterns.
Parameters:
classification_outputs (List[Dict]): List of standardized classification outputs
baseline_stats (Dict, optional): Baseline statistics for comparison
Returns: Dict containing anomaly metrics
Detection Performance Metrics
Analyze statistical properties of detection outputs.
Parameters:
detection_outputs
(List[Dict]): List of standardized detection outputs
Returns: Dict containing detection metrics
Metrics computed:
Bounding box size distribution (area, aspect ratio)
Spatial distribution analysis
Confidence distribution
Number of detections per image
Detection clustering analysis
Consistency across similar inputs
Overlap analysis
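The size-distribution and per-image-count metrics above reduce to simple array operations. The sketch below assumes a `boxes` field holding `[x1, y1, x2, y2]` coordinates; both the field name and box format are assumptions for illustration.

```python
import numpy as np

# Hypothetical standardized detection outputs (field name and box format assumed).
detections = [
    {"boxes": [[10, 10, 50, 30], [0, 0, 20, 20]]},
    {"boxes": [[5, 5, 15, 45]]},
]

boxes = np.array([b for d in detections for b in d["boxes"]], dtype=float)
widths = boxes[:, 2] - boxes[:, 0]
heights = boxes[:, 3] - boxes[:, 1]
areas = widths * heights               # bounding box area distribution
aspect = widths / heights              # aspect ratio distribution
dets_per_image = [len(d["boxes"]) for d in detections]

print(areas.tolist())    # [800.0, 400.0, 400.0]
print(dets_per_image)    # [2, 1]
```

Tracking these distributions over time can reveal drift, e.g., a model that suddenly emits many tiny boxes, without needing ground-truth annotations.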
Analyze spatial distribution of detections.
Parameters:
detection_outputs (List[Dict]): List of standardized detection outputs
image_shapes (List[Tuple], optional): Shapes of corresponding images
Returns: Dict containing spatial distribution metrics
Measure stability of detections across a series of similar inputs.
Parameters:
detection_outputs_series (List[List[Dict]]): Series of detection outputs on similar inputs
Returns: Dict containing stability metrics
Segmentation Performance Metrics
Analyze statistical properties of segmentation outputs.
Parameters:
segmentation_outputs (List[Dict]): List of standardized segmentation outputs
Returns: Dict containing segmentation metrics
Metrics computed:
Volume/area distribution
Shape analysis (compactness, elongation, etc.)
Boundary smoothness
Confidence distribution (for probability maps)
Topology analysis
Region properties
Class distribution (for multi-class segmentation)
Connected component analysis
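Area and boundary metrics for a binary mask can be sketched with plain NumPy. This is an illustration, not the library's implementation; a boundary pixel is taken here as any foreground pixel with at least one background 4-neighbour.

```python
import numpy as np

# Toy binary segmentation mask (1 = foreground).
mask = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

area = int(mask.sum())           # foreground area in pixels
area_fraction = mask.mean()      # fraction of the image segmented

# Boundary: foreground pixels with at least one background 4-neighbour.
padded = np.pad(mask, 1)
neigh_min = np.minimum.reduce([
    padded[:-2, 1:-1], padded[2:, 1:-1],   # up / down neighbours
    padded[1:-1, :-2], padded[1:-1, 2:],   # left / right neighbours
])
boundary = (mask == 1) & (neigh_min == 0)
print(area, int(boundary.sum()))  # 9 8
```

The ratio of boundary length to area is one simple compactness proxy; a sudden change in it across studies can flag fragmented or noisy masks.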
Analyze boundary characteristics of segmentation mask.
Parameters:
segmentation_mask (np.ndarray): Segmentation mask
Returns: Dict containing boundary metrics
Measure stability of segmentations across a series of similar inputs.
Parameters:
segmentation_outputs_series (List[List[Dict]]): Series of segmentation outputs on similar inputs
Returns: Dict containing stability metrics
Uncertainty Estimation
Estimate uncertainty using Monte Carlo dropout.
Parameters:
model_func (Callable): Function that runs inference with dropout enabled
input_data (Any): Input data for inference
num_samples (int, optional): Number of forward passes
**kwargs: Additional arguments for model_func
Returns: Dict containing uncertainty metrics
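The Monte Carlo dropout idea is simply repeated stochastic forward passes summarized by their spread. A minimal sketch, with a toy stochastic `model_func` standing in for a real model run with dropout active:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_func(x):
    """Stand-in for a model run with dropout enabled: each call is stochastic."""
    return x + rng.normal(0, 0.1)

def mc_dropout_uncertainty(model_func, input_data, num_samples=50):
    # Repeated stochastic forward passes; spread approximates model uncertainty.
    samples = np.array([model_func(input_data) for _ in range(num_samples)])
    return {"mean": float(samples.mean()), "std": float(samples.std())}

result = mc_dropout_uncertainty(model_func, 1.0, num_samples=200)
print(round(result["mean"], 1))  # close to the deterministic output of 1.0
```

A high per-input standard deviation flags cases the model is unsure about, which is useful for routing studies to human review.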
Estimate uncertainty from ensemble predictions.
Parameters:
predictions (List[Dict]): Predictions from ensemble members
Returns: Dict containing uncertainty metrics
Decompose predictive uncertainty into aleatoric and epistemic components.
Parameters:
predictions (List[Dict]): Multiple predictions (from ensemble or MC dropout)
Returns: Dict containing uncertainty decomposition
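For classification, a standard decomposition treats the entropy of the mean prediction as total uncertainty, the mean of the per-member entropies as the aleatoric part, and their difference (mutual information) as the epistemic part. A sketch under that assumption:

```python
import numpy as np

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Class probabilities from several ensemble members / MC passes (shape assumed).
member_probs = np.array([
    [0.9, 0.1],
    [0.6, 0.4],
    [0.7, 0.3],
])

mean_p = member_probs.mean(axis=0)
total = float(entropy(mean_p))                    # predictive (total) uncertainty
aleatoric = float(entropy(member_probs).mean())   # expected data uncertainty
epistemic = total - aleatoric                     # model (epistemic) uncertainty
print(epistemic >= 0)                             # True: mutual information is non-negative
```

High epistemic uncertainty suggests the model has seen too little similar data, whereas high aleatoric uncertainty points at genuinely ambiguous inputs.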
Cross-Task Consistency
Analyze consistency between different task outputs on the same inputs.
Parameters:
classification_outputs (List[Dict]): Classification outputs
detection_outputs (List[Dict]): Detection outputs
segmentation_outputs (List[Dict]): Segmentation outputs
Returns: Dict containing consistency metrics
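One simple consistency check is whether a positive classification coincides with at least one detection on the same study. The field names below are illustrative assumptions, not the library's schema:

```python
# Paired outputs for the same three studies (field names assumed).
cls = [{"label": 1}, {"label": 0}, {"label": 1}]
det = [{"boxes": [[0, 0, 5, 5]]}, {"boxes": []}, {"boxes": []}]

# Agreement: "positive" classification iff at least one box was detected.
agree = [(c["label"] == 1) == (len(d["boxes"]) > 0) for c, d in zip(cls, det)]
consistency_rate = sum(agree) / len(agree)
print(consistency_rate)  # 2 of 3 studies agree across tasks
```

A falling cross-task agreement rate is a label-free warning sign: at least one head is drifting even if each task's own confidence still looks healthy.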
Clinical Value Metrics
Functions for evaluating the clinical utility of AI outputs.
Analyze patterns in user feedback.
Parameters:
feedback_entries (List[Dict]): User feedback entries
Returns: Dict containing feedback analysis
Metrics computed:
Overall rating statistics
Rating distribution by modality/task
Common issue categories
Acceptance rate
Modification patterns
Time savings estimation
User satisfaction trends
Free text comment analysis
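A few of the listed metrics, acceptance rate, rating statistics, and per-modality breakdown, reduce to straightforward aggregation. The entry fields below are assumed for illustration:

```python
from collections import Counter

# Hypothetical feedback entries (field names assumed).
entries = [
    {"modality": "CT", "rating": 4, "accepted": True},
    {"modality": "CT", "rating": 2, "accepted": False},
    {"modality": "MRI", "rating": 5, "accepted": True},
]

acceptance_rate = sum(e["accepted"] for e in entries) / len(entries)
mean_rating = sum(e["rating"] for e in entries) / len(entries)
by_modality = Counter(e["modality"] for e in entries)

print(round(acceptance_rate, 2))   # share of AI results accepted as-is
print(dict(by_modality))           # feedback volume per modality
```

Segmenting acceptance rate by modality or user role (as the next function describes) is often more revealing than the global figure.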
Analyze result acceptance patterns.
Parameters:
feedback_entries (List[Dict]): User feedback entries
segment_by (str, optional): Field to segment by (e.g., "modality", "user_role")
Returns: Dict containing acceptance metrics
Analyze how clinicians modify AI outputs.
Parameters:
original_outputs (List[Dict]): Original AI outputs
modified_outputs (List[Dict]): Clinician-modified outputs
Returns: Dict containing modification pattern metrics
Estimate time saved through AI assistance.
Parameters:
feedback_entries (List[Dict]): User feedback entries with timing information
Returns: Dict containing time saving metrics
Analyze clinician confidence in AI outputs.
Parameters:
feedback_entries (List[Dict]): User feedback entries
Returns: Dict containing clinical confidence metrics
Detect trends in user feedback over time.
Parameters:
feedback_entries (List[Dict]): User feedback entries
time_window (str, optional): Time window for trend analysis (e.g., "day", "week", "month")
Returns: Dict containing trend metrics
Integration with Carpl.ai Platform
The library provides specialized components for integrating with the Carpl.ai platform, facilitating the third-party workflow where:
AI teams integrate their inference API into Carpl.ai
Carpl.ai provides the PACS software interface to hospitals
The observability module is integrated by Carpl.ai
Carpl.ai provides dashboards to AI teams
CarplObservabilityMiddleware
Parameters:
config_path (str, optional): Path to configuration file
vendor_id (str, optional): Vendor ID for AI model
model_id (str, optional): Model ID for tracking
privacy_level (str, optional): Privacy strictness level ("standard", "high", "extreme")
Methods:
Hook to be called before inference by Carpl.ai.
Parameters:
dicom_data (Any): Input DICOM data in any supported format
model_id (str, optional): Model ID if different from initialization
Returns: Dict containing input metrics for logging (PHI-free)
Hook to be called after inference by Carpl.ai.
Parameters:
prediction (Any): Model prediction in any supported format
model_id (str, optional): Model ID if different from initialization
inference_time (float, optional): Inference time in seconds
input_metadata (Dict, optional): Additional input metadata
Returns: Dict containing output metrics for logging (PHI-free)
Hook to be called when user feedback is received.
Parameters:
feedback_data (Dict): User feedback in any supported format
model_id (str, optional): Model ID if different from initialization
Returns: Dict containing feedback metrics for logging (PHI-free)
Generate aggregated metrics report for the vendor/model.
Parameters:
time_range (Tuple[datetime, datetime], optional): Time range for report
metrics_filter (Dict, optional): Filter for specific metrics
Returns: Dict containing aggregated metrics (PHI-free)
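The middleware flow can be sketched as follows. This is a hedged illustration: the hook names (`pre_inference`, `post_inference`) and the derived metric fields are hypothetical, since the reference above gives descriptions rather than signatures.

```python
# Minimal sketch of the pre-/post-inference hook flow (names hypothetical).
class ObservabilityMiddleware:
    def __init__(self, vendor_id, model_id, privacy_level="high"):
        self.vendor_id, self.model_id = vendor_id, model_id
        self.privacy_level = privacy_level
        self.records = []  # PHI-free metric records only

    def pre_inference(self, dicom_data, model_id=None):
        # Derive PHI-free input metrics; never retain raw pixel or header data.
        metrics = {"num_slices": len(dicom_data)}
        self.records.append(("input", metrics))
        return metrics

    def post_inference(self, prediction, inference_time=None):
        metrics = {"confidence": prediction["confidence"],
                   "inference_time": inference_time}
        self.records.append(("output", metrics))
        return metrics

mw = ObservabilityMiddleware("vendor-1", "chest-ct-v2")
mw.pre_inference(dicom_data=[object()] * 64)
out = mw.post_inference({"confidence": 0.93}, inference_time=0.12)
print(out["confidence"], len(mw.records))
```

The key design point is that only derived, PHI-free metrics cross the middleware boundary; the hosting platform logs the returned dicts, never the inputs themselves.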
Configuration Management for Carpl.ai
Push observability configuration to the Carpl.ai platform.
Parameters:
config_path (str): Path to configuration file
api_key (str, optional): Carpl.ai API key
endpoint (str, optional): Carpl.ai API endpoint
Returns: Dict containing API response
Exceptions:
APIError: If API call fails
ConfigurationError: If configuration is invalid
AuthenticationError: If authentication fails
Get observability configuration from Carpl.ai platform.
Parameters:
vendor_id (str): Vendor ID
model_id (str): Model ID
api_key (str, optional): Carpl.ai API key
endpoint (str, optional): Carpl.ai API endpoint
Returns: Dict containing configuration
Exceptions:
APIError: If API call fails
AuthenticationError: If authentication fails
NotFoundError: If configuration not found
Privacy-Safe API Integration
Parameters:
api_key (str, optional): Carpl.ai API key
api_endpoint (str, optional): Carpl.ai API endpoint
Methods:
Send metrics to Carpl.ai platform.
Parameters:
metrics_data (Dict): Metrics data (must be PHI-free)
vendor_id (str): Vendor ID
model_id (str): Model ID
Returns: Dict containing API response
Get metrics from Carpl.ai platform.
Parameters:
vendor_id (str): Vendor ID
model_id (str): Model ID
time_range (Tuple[datetime, datetime], optional): Time range for metrics
metrics_filter (Dict, optional): Filter for specific metrics
Returns: Dict containing metrics
Get URL for Carpl.ai metrics dashboard.
Parameters:
vendor_id (str): Vendor ID
model_id (str): Model ID
dashboard_type (str, optional): Dashboard type
Returns: str containing dashboard URL
Integration Approaches for Development and Production
The library provides multiple integration approaches to support both development and production environments in the medical AI workflow.
Development Environment Integration
Training Integration with PyTorch
Parameters:
config_path (str, optional): Path to configuration file
log_dir (str, optional): Directory for logging training metrics
Methods:
Create a PyTorch Lightning callback for observability.
Returns: ObservabilityCallback instance for Lightning
Hook a PyTorch model for observability.
Parameters:
model
(torch.nn.Module): PyTorch model
Returns: Hooked model
Track a training/validation batch.
Parameters:
inputs (Any): Batch inputs
outputs (Any): Model outputs
targets (Any, optional): Ground truth targets
loss (float, optional): Loss value
batch_idx (int, optional): Batch index
epoch (int, optional): Epoch number
Returns: Dict containing batch metrics
Track epoch-level metrics.
Parameters:
epoch (int): Epoch number
metrics (Dict): Epoch metrics
phase (str, optional): "train", "validation", or "test"
Returns: Dict containing epoch metrics
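A framework-agnostic sketch of the epoch-tracking call above (the method name `track_epoch` and the record layout are assumptions; the reference lists only the parameters):

```python
# Minimal sketch of epoch-level metric tracking (names and layout assumed).
class EpochTracker:
    def __init__(self):
        self.history = []

    def track_epoch(self, epoch, metrics, phase="train"):
        record = {"epoch": epoch, "phase": phase, **metrics}
        self.history.append(record)   # append-only log, one record per epoch/phase
        return record

tracker = EpochTracker()
tracker.track_epoch(0, {"loss": 0.8}, phase="train")
tracker.track_epoch(0, {"loss": 0.9}, phase="validation")

val_losses = [r["loss"] for r in tracker.history if r["phase"] == "validation"]
print(val_losses)  # [0.9]
```

The same tracker instance can back both the PyTorch Lightning callback and the Keras callback described here, since both ultimately emit (epoch, metrics, phase) triples.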
Run and track validation.
Parameters:
model (torch.nn.Module): PyTorch model
dataloader (torch.utils.data.DataLoader): Validation dataloader
device (str, optional): Device for validation
metrics_fn (callable, optional): Function to compute metrics
Returns: Dict containing validation metrics
Training Integration with TensorFlow/Keras
Parameters:
config_path (str, optional): Path to configuration file
log_dir (str, optional): Directory for logging training metrics
Methods:
Create a Keras callback for observability.
Returns: KerasCallback instance for Keras
Track a training/validation batch.
Parameters:
inputs (Any): Batch inputs
outputs (Any): Model outputs
targets (Any, optional): Ground truth targets
loss (float, optional): Loss value
batch_idx (int, optional): Batch index
epoch (int, optional): Epoch number
Returns: Dict containing batch metrics
Track epoch-level metrics.
Parameters:
epoch (int): Epoch number
metrics (Dict): Epoch metrics
phase (str, optional): "train", "validation", or "test"
Returns: Dict containing epoch metrics
Inference Pipeline Integration
Parameters:
model (Callable): Model function or object with call method
preprocessor (Callable, optional): Preprocessing function
postprocessor (Callable, optional): Postprocessing function
observer (ObservabilityClient, optional): Observability client
Methods:
Run inference with observability.
Parameters:
input_data (Any): Input data in any supported format
**kwargs: Additional arguments for model
Returns: Model prediction
Run batch inference with observability.
Parameters:
input_batch (List[Any]): Batch of input data
**kwargs: Additional arguments for model
Returns: List of model predictions
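The pipeline wrapper pattern above can be sketched as follows. The class name `ObservedPipeline` and the latency-only metric are illustrative assumptions; the real component would also route input/output metrics to the observability client.

```python
import time

# Hedged sketch of an observed inference pipeline (class name hypothetical).
class ObservedPipeline:
    def __init__(self, model, preprocessor=None, postprocessor=None):
        self.model = model
        self.preprocessor = preprocessor or (lambda x: x)
        self.postprocessor = postprocessor or (lambda y: y)
        self.timings = []  # per-call latency, one simple observability metric

    def predict(self, input_data, **kwargs):
        start = time.perf_counter()
        pred = self.postprocessor(self.model(self.preprocessor(input_data), **kwargs))
        self.timings.append(time.perf_counter() - start)
        return pred

    def predict_batch(self, input_batch, **kwargs):
        return [self.predict(x, **kwargs) for x in input_batch]

pipe = ObservedPipeline(model=lambda x: x * 2, preprocessor=lambda x: x + 1)
results = pipe.predict_batch([1, 2, 3])
print(results)  # [4, 6, 8]
```

Wrapping pre- and post-processing inside the timed region means latency metrics reflect the full pipeline the clinician experiences, not just the model forward pass.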
Production Environment Integration
Web Service Integration
Create FastAPI middleware for observability.
Parameters:
observer (ObservabilityClient): Observability client
Returns: FastAPI middleware class
Create Flask middleware for observability.
Parameters:
observer (ObservabilityClient): Observability client
Returns: Flask middleware function
Parameters:
model_path (str): Path to saved model
config_path (str, optional): Path to configuration file
framework (str, optional): "pytorch", "tensorflow", "onnx", or "auto"
host (str, optional): Host for serving
port (int, optional): Port for serving
Methods:
Start the model serving application with observability.
Returns: None
Docker Container Integration
Build Docker container with model and observability.
Parameters:
model_path (str): Path to saved model
config_path (str): Path to configuration file
output_path (str): Path for output container
base_image (str, optional): Base Docker image
Returns: str containing container ID
DICOM Integration
Parameters:
aetitle (str): AE title for DICOM node
port (int): Port for DICOM node
config_path (str, optional): Path to configuration file
Methods:
Start DICOM node with observability.
Parameters:
model_path (str): Path to saved model
output_directory (str): Directory for output DICOM files
Returns: None
Third-Party Integration with Carpl.ai
For AI Model Developers
Parameters:
api_key (str, optional): Carpl.ai API key
config_path (str, optional): Path to configuration file
Methods:
Configure observability for model on Carpl.ai platform.
Parameters:
model_id (str): Model ID
privacy_level (str, optional): Privacy level ("standard", "high", "extreme")
Returns: Dict containing configuration status
Get metrics dashboard URL for model.
Parameters:
model_id (str): Model ID
time_range (Tuple[datetime, datetime], optional): Time range for metrics
Returns: str containing dashboard URL
Download metrics for model.
Parameters:
model_id (str): Model ID
time_range (Tuple[datetime, datetime], optional): Time range for metrics
format (str, optional): "json", "csv", or "parquet"
Returns: Dict or bytes containing metrics data
For Carpl.ai Platform Integration
Parameters:
config_path (str, optional): Path to configuration file
Methods:
Initialize vendor in observability system.
Parameters:
vendor_id (str): Vendor ID
vendor_config (Dict, optional): Vendor configuration
Returns: Dict containing initialization status
Register model in observability system.
Parameters:
vendor_id (str): Vendor ID
model_id (str): Model ID
model_type (str): Model type
model_version (str, optional): Model version
Returns: Dict containing registration status
Track inference for vendor/model.
Parameters:
vendor_id (str): Vendor ID
model_id (str): Model ID
dicom_data (Any): Input DICOM data
prediction (Any): Model prediction
feedback (Dict, optional): User feedback
Returns: Dict containing tracking status
Generate vendor-specific report.
Parameters:
vendor_id (str): Vendor ID
model_id (str, optional): Model ID (if None, report for all models)
time_range (Tuple[datetime, datetime], optional): Time range for report
Returns: Dict containing report data
Privacy and Security
The library provides specialized components for ensuring privacy and security in healthcare settings, particularly important when deployed through platforms like Carpl.ai.
Privacy-Preserving Metrics Collection
Parameters:
privacy_level (str): Privacy strictness level ("standard", "high", "extreme")
config (Dict, optional): Privacy configuration
Methods:
Process DICOM metadata to remove/transform PHI.
Parameters:
dicom_metadata (Dict): DICOM metadata
Returns: Dict containing privacy-safe metadata
Process image data to remove identifiable information.
Parameters:
image_data (np.ndarray): Image data
Returns: Privacy-safe derived metrics (not the image itself)
Apply differential privacy to aggregated metrics.
Parameters:
metrics (Dict): Metrics to protect
epsilon (float, optional): Privacy parameter
delta (float, optional): Privacy parameter
Returns: Dict containing privacy-protected metrics
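The standard way to realize this for count metrics is the Laplace mechanism: add noise with scale sensitivity/epsilon to each value. The function name and metric fields below are illustrative, not the library's API:

```python
import numpy as np

rng = np.random.default_rng(42)

def apply_differential_privacy(metrics, epsilon=1.0, sensitivity=1.0):
    # Laplace mechanism: per-metric noise with scale = sensitivity / epsilon.
    # Smaller epsilon -> more noise -> stronger privacy guarantee.
    scale = sensitivity / epsilon
    return {k: v + rng.laplace(0, scale) for k, v in metrics.items()}

noisy = apply_differential_privacy({"study_count": 120}, epsilon=1.0)
print(abs(noisy["study_count"] - 120) < 50)  # noisy but close, not exact
```

The published value no longer reveals whether any single study was included, at the cost of a small, epsilon-controlled error in the reported count.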
Check if data contains potential PHI leakage.
Parameters:
data (Any): Data to check
Returns: Tuple[bool, List[str]] containing leakage status and fields
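A leakage check of this shape can be sketched with pattern heuristics. This is illustrative only; a production PHI scanner needs far broader rules (names, addresses, device serials, burned-in pixel text) than the two patterns shown:

```python
import re

# Illustrative heuristics only; real PHI detection requires much broader coverage.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.I),           # medical record number
    "date_of_birth": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),  # ISO-style date
}

def check_phi_leakage(data):
    text = str(data)
    flagged = [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]
    return (bool(flagged), flagged)

leaked, fields = check_phi_leakage({"note": "MRN: 123456, scanned 2023-07-01"})
print(leaked, fields)  # True, both patterns flagged
```

Running such a check on every outbound metrics payload is a cheap last line of defence before data leaves the clinical environment.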
Secure Storage
Parameters:
storage_path (str): Path to secure storage
encryption_key (str, optional): Encryption key
Methods:
Securely store data.
Parameters:
data (Any): Data to store
data_id (str): Identifier for data
Returns: bool indicating success
Securely retrieve data.
Parameters:
data_id (str): Identifier for data
Returns: Retrieved data
Securely delete data.
Parameters:
data_id (str): Identifier for data
Returns: bool indicating success
Access Control
Parameters:
config (Dict, optional): Access control configuration
Methods:
Validate user access to resource.
Parameters:
user_id (str): User identifier
resource_id (str): Resource identifier
access_level (str): Requested access level
Returns: bool indicating access granted
Create access token for resources.
Parameters:
user_id (str): User identifier
resource_ids (List[str]): Resource identifiers
access_level (str): Access level
expiry (datetime, optional): Token expiry
Returns: str containing access token
Validate access token.
Parameters:
token (str): Access token
Returns: Dict containing validation result
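One common way to implement create/validate token pairs like these is an HMAC-signed payload. The sketch below is an assumption about mechanism, not the library's actual token format, and the hard-coded secret is a placeholder for a managed key:

```python
import base64, hashlib, hmac, json, time

SECRET = b"example-secret"  # placeholder; load from a key manager in practice

def create_access_token(user_id, resource_ids, access_level, expiry_ts):
    payload = json.dumps({"u": user_id, "r": resource_ids,
                          "a": access_level, "exp": expiry_ts}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    # The urlsafe base64 alphabet never contains '.', so it is a safe separator.
    return (base64.urlsafe_b64encode(payload) + b"." +
            base64.urlsafe_b64encode(sig)).decode()

def validate_token(token):
    p64, s64 = token.encode().split(b".")
    payload = base64.urlsafe_b64decode(p64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    ok = hmac.compare_digest(base64.urlsafe_b64decode(s64), expected)
    claims = json.loads(payload) if ok else {}
    return {"valid": bool(ok) and claims.get("exp", 0) > time.time(),
            "claims": claims}

token = create_access_token("u1", ["model-a"], "read", time.time() + 3600)
print(validate_token(token)["valid"])  # True
```

`hmac.compare_digest` is used for the signature check to avoid timing side channels, and expiry is enforced server-side from the signed claims rather than trusted from the client.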
Configuration Reference
YAML Configuration Format
The library uses YAML configuration files to control its behavior. Below is a reference of the configuration format.
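As a hedged illustration only (the key names below are examples, not the library's authoritative schema), a configuration file in this style might look like:

```yaml
# Illustrative configuration sketch; key names are examples, not a confirmed schema.
observability:
  environment: production
  privacy:
    level: high
    differential_privacy:
      epsilon: 1.0
  storage:
    path: /var/lib/medicalai-observe
    retention_days: 90
  metrics:
    modalities: [ct, mri, xray]
    tasks: [classification, detection, segmentation]
  integration:
    vendor_id: vendor-1
    model_id: chest-ct-v2
```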
Environment Variables
The library supports configuration through environment variables:
MEDICAL_AI_OBSERVE_CONFIG: Path to configuration file
MEDICAL_AI_OBSERVE_ENV: Environment ("development", "production")
MEDICAL_AI_OBSERVE_LOG_LEVEL: Log level
MEDICAL_AI_OBSERVE_PRIVACY_LEVEL: Privacy level
MEDICAL_AI_OBSERVE_STORAGE_PATH: Path for local storage
MEDICAL_AI_OBSERVE_API_ENDPOINT: API endpoint for remote storage
MEDICAL_AI_OBSERVE_API_KEY: API key for remote storage
MEDICAL_AI_OBSERVE_VENDOR_ID: Vendor ID for Carpl.ai integration
MEDICAL_AI_OBSERVE_MODEL_ID: Model ID for Carpl.ai integration
Best Practices Guide
Integration Best Practices
Start Simple
Begin with basic metrics before enabling advanced features
Add modality-specific metrics incrementally
Test with a limited set of studies before full deployment
Privacy Considerations
Set privacy level to "high" or "extreme" for clinical deployments
Use data sketching for distributional data
Apply differential privacy for sensitive count metrics
Never transmit PHI or PII to third-party services
Performance Optimization
Enable batching for high-volume deployments
Use asynchronous processing for non-blocking operation
Set appropriate buffer sizes based on deployment scale
Configure appropriate retention policies for local storage
Integration with Carpl.ai
Use the CarplObservabilityMiddleware for seamless integration
Configure vendor-specific metrics dashboards
Implement proper access controls for metrics visibility
Test integration thoroughly before production deployment
Modality-Specific Recommendations
CT Imaging
Track HU calibration and consistency
Monitor noise index across studies
Track dose metrics when available
MRI Imaging
Monitor sequence-specific quality metrics
Track artifact prevalence by sequence type
Track protocol parameter distributions
Ultrasound Imaging
Monitor gain settings and dynamic range
Track penetration depth and speckle characteristics
Monitor artifact prevalence (shadowing, enhancement)
X-ray Imaging
Track exposure index and deviation index
Monitor positioning quality metrics
Track artifact prevalence (grid lines, foreign objects)
Mammography
Track compression and positioning metrics
Monitor breast density distribution
Track technical recall rates
Digital Pathology
Monitor focus quality across slides
Track stain characteristics and normalization
Monitor scanning artifacts
Task-Specific Recommendations
Classification Models
Track confidence distribution by class
Monitor calibration metrics
Track entropy of predictions
Detection Models
Track size and location distribution of detections
Monitor confidence distribution
Track number of detections per image
Segmentation Models
Track volume/area distribution
Monitor boundary smoothness
Track topology consistency
Multi-Task Models
Track consistency between tasks
Monitor task-specific metrics
Track performance correlations between tasks
Version History and Roadmap
Version History
v1.0.0 (2024-05-18)
Initial public release
Support for all major imaging modalities
Basic metrics for image quality and model performance
Integration with Carpl.ai platform
Privacy-preserving analytics
v0.9.0 (2024-04-15)
Beta release with limited feature set
Core functionality for CT, MRI, and X-ray
Basic privacy-preserving analytics
Roadmap
Q3 2024
Advanced uncertainty metrics
Expanded modality-specific metrics
Enhanced visualization capabilities
Q4 2024
Federated analytics capabilities
Advanced clinical impact metrics
Expanded API integrations
Q1 2025
Multi-vendor comparative analytics
Automated insight generation
Predictive maintenance features
Q2 2025
Regulatory compliance reporting
Advanced anomaly detection
Continuous learning framework