Meta has announced the Meta LLM Compiler, a family of models built on Meta Code Llama with additional code optimization and compiler capabilities. These models can emulate the compiler, predict optimal passes for code size, and disassemble code. Notably, the Meta LLM Compiler beats GPT-4 on code size improvement and disassembly, achieving 77% of the optimizing potential of an autotuning search and a 45% disassembly round-trip success rate. The models, which work with x86 assembly and LLVM-IR, are available with 7B and 13B parameters and can be fine-tuned for new tasks. This release marks a significant advancement in AI-driven code optimization.
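To make the "disassembly round trip" metric concrete: the model lifts assembly back to LLVM-IR, the IR is lowered again by the compiler, and a round trip counts as a success when the result matches the original. This is a minimal sketch of such a metric, not Meta's evaluation harness; the `lift` and `lower` functions here are hypothetical stand-ins for the model and the compiler.

```python
def round_trip_rate(samples, lift, lower):
    """Fraction of assembly samples that survive a lift/lower round trip.

    samples: list of assembly strings
    lift:    asm -> IR   (stand-in for the model's disassembly step)
    lower:   IR -> asm   (stand-in for the compiler backend)
    """
    hits = sum(1 for asm in samples if lower(lift(asm)) == asm)
    return hits / len(samples)

# Toy stubs just to exercise the metric: upper-casing "lifts" and
# lower-casing "lowers", so only already-lowercase samples round-trip.
samples = ["mov eax, 1", "RET"]
rate = round_trip_rate(samples, lift=str.upper, lower=str.lower)
print(rate)  # "mov eax, 1" survives, "RET" does not -> 0.5
```

The reported 45% figure would correspond to a rate of 0.45 under an exact-match comparison like this one.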
Tomorrow Meta LLM Compiler, Gemma2 With @picocreator https://t.co/yn6kRBQVIp
Meta releases LLM Compiler, a family of models built on Code Llama specifically designed for code optimization tasks, available with 7B and 13B parameters (@michaelfnunez / VentureBeat) https://t.co/RNhpSyKlPY 📫 Subscribe: https://t.co/OyWeKSRpIM https://t.co/rjFhuRL7B5
Very exciting that this is out now (from my time at OpenAI): We trained an LLM critic to find bugs in code, and this helps humans find flaws on real-world production tasks that they would have missed otherwise. A promising sign for scalable oversight! https://t.co/e6CiXXoCeG https://t.co/EJ6OSfUN9p
Meta's LLM Compiler is the latest AI breakthrough to change the way we code https://t.co/33eVSCkVDf
Lots of good stuff today * CriticGPT: New OpenAI model that catches coding mistakes * LLM Compiler: New Meta model that works with x86 assembly <--> LLVM-IR * Gemma 2: New Google models that are mighty for the size with some great normalization tricks
BREAKING NEWS 🔥🔥 Meta LLM Compiler, a family of models built on Meta Code Llama with additional code optimization and compiler capabilities. These models can emulate the compiler, predict optimal passes for code size, and disassemble code. They can be fine-tuned for new… https://t.co/64R7sLNHSf
WAIT, it's not over; Meta just dropped the LLM Compiler! 🧑‍💻 > Beats GPT-4 on code size improvement and disassembly > Achieves 77% of the optimising potential of an autotuning search and 45% disassembly round trip > Built on top of CodeLLaMa with improved code optimisation and… https://t.co/xvjV9wgxql
Today we’re announcing Meta LLM Compiler, a family of models built on Meta Code Llama with additional code optimization and compiler capabilities. These models can emulate the compiler, predict optimal passes for code size, and disassemble code. They can be fine-tuned for new…