Serve generative AI models like T5 faster than ever with Friendli Inference (32.8x faster for T5-3B)