Apple's MLX framework has been successfully run on a Vision Pro, showcasing fast on-device text generation. Ivan Fioravanti is credited with the feat, with positive feedback on the speed and performance of the Vision Pro hardware. Running the MLX Swift LLMEval example natively on the device is a notable step for local AI inference, and the framework's tooling continues to develop quickly.
Apple MLX + Vision Pro + Mistral 7B Q4 Let's push this to the limit! Fast and Furious! Look at the speed! Vision Pro hardware is pretty amazing! Note: I tried with codellama-13b-q4, but it goes beyond the 5GB per app limit, no go, for the time being. https://t.co/T06yGTrHIo
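The 5GB per-app limit mentioned above lines up with a rough back-of-envelope estimate: 4-bit quantized weights take about 0.5 bytes per parameter, before counting activations, KV cache, and runtime overhead. A minimal sketch of that arithmetic (the helper function and exact figures are illustrative assumptions, not from the source):

```python
# Rough weight-memory estimate for 4-bit quantized models.
# Ignores activation memory, KV cache, and runtime overhead,
# so real usage is somewhat higher than these figures.

def q4_weight_gb(num_params_billions: float) -> float:
    """Approximate weight footprint in GB at 4 bits (0.5 bytes) per parameter."""
    bytes_total = num_params_billions * 1e9 * 0.5
    return bytes_total / 1e9

APP_LIMIT_GB = 5.0  # per-app memory limit cited in the tweet

for name, params_b in [("Mistral 7B Q4", 7.0), ("CodeLlama 13B Q4", 13.0)]:
    gb = q4_weight_gb(params_b)
    verdict = "fits" if gb < APP_LIMIT_GB else "exceeds limit"
    print(f"{name}: ~{gb:.1f} GB weights -> {verdict}")
```

Weights alone put Mistral 7B at roughly 3.5 GB (under the limit) and CodeLlama 13B at roughly 6.5 GB (over it), consistent with the outcome reported in the tweet.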
Apple Cult Members: getting LLM text generation running on a Vision Pro. "Actually quite fast." https://t.co/x5a7iDKgBE
LLM text generation on a vision pro! Actually quite fast. I think @ivanfioravanti is making history as the first MLX model to run on that device. Guide: https://t.co/Qjo1DWwqfI https://t.co/PY6k8wGdDq
Side note: macOS and Apple Silicon are positioned to win local AI inference right now. MLX is an amazing framework and the tools for it are being developed quickly, with almost a new major development daily. And not having to worry about GPU VRAM or CPU inference on system memory is… https://t.co/msHx17kUdu
Apple MLX on Vision Pro? YES YOU CAN! BOOM!!! Here's the raw video of the MLX Swift LLMEval example running natively on the device! Thanks @awnihannun #VisionPro #LLM #Apple https://t.co/35T960KopQ