Recently, IBM Research added a third improvement to the mix: tensor parallelism. The biggest bottleneck in AI inference is memory: running a 70-billion-parameter model requires at least 150 gigabytes of memory (at 16-bit precision the weights alone occupy roughly 140 GB), nearly twice what a single Nvidia A100 GPU with 80 GB holds.
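
Tensor parallelism works around that memory wall by splitting each weight matrix across several GPUs, so no single device has to hold the full model. The sketch below is a minimal, CPU-only illustration of the idea, assuming PyTorch; the helper name `shard_linear` and the shapes are illustrative, not IBM's implementation. It shards one linear layer's weight across two simulated devices and checks that the recombined output matches the unsharded computation.

```python
# Minimal, CPU-only sketch of tensor parallelism for one linear layer.
# Assumes PyTorch; shard_linear and all shapes are illustrative, not IBM's code.
import torch

def shard_linear(weight: torch.Tensor, num_shards: int):
    # Split a (out_features, in_features) weight so each "device" holds
    # only out_features / num_shards of the output features.
    return torch.chunk(weight, num_shards, dim=0)

torch.manual_seed(0)
in_features, out_features, num_shards = 8, 6, 2

weight = torch.randn(out_features, in_features)
x = torch.randn(4, in_features)          # a small batch of activations

full_out = x @ weight.T                  # single-device reference result

shards = shard_linear(weight, num_shards)
partial_outs = [x @ w.T for w in shards] # in practice, each shard lives on its own GPU
tp_out = torch.cat(partial_outs, dim=1)  # gather the partial outputs

print(torch.allclose(full_out, tp_out))  # True: same math, memory split across devices
```

Splitting the weights this way cuts per-GPU memory roughly by the number of shards, which is how a model that needs 150 GB can be spread across two or more 80 GB A100s.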