Similar Stories
Sources
- TheAIObserverX
Depth Anything V2 ◼ Depth Anything V2 outshines V1 with crisper, sturdier depth predictions by using synthetic images, a beefed-up teacher model, and large-scale pseudo-labels. 📈 Plus, it's 10x faster than peers! A new benchmark is set for future research. 🛠️ #AI… https://t.co/ekqAWWDRoP
- fly51fly
[CV] Depth Anything V2 L Yang, B Kang, Z Huang, Z Zhao... [HKU & TikTok] (2024) https://t.co/JCVBqcDkm3 - Depth Anything V2 produces much more robust and fine-grained depth predictions than V1. This is achieved by three key modifications: 1) Replacing all labeled real images… https://t.co/Jd9M9HwK0V
- Xenova
Depth Anything V2 just released, enabling real-time depth estimation directly in your browser with 🤗 Transformers.js and WebGPU acceleration! ⚡️ The smallest model is only ~50MB (@ fp16), making it perfect for on-device usage! 😍 Check out the demo (+ source code) 👇 https://t.co/WKVqrxztRy
- Gradio
🔥 Depth Anything is back with version 2 - and is 10x faster than other current methods! 💪 Models of various sizes (from 25 million to 1.3 billion parameters) available on Huggingface Hub✨ Demo link & more😉👇 https://t.co/176haprazi
- Adina Yakup
Depth Anything 2 🔥 A monocular depth estimation model from HKU and TikTok 🚀 Model: https://t.co/USg2ffjERa Demo: https://t.co/wzUxYZo8M7 Paper:https://t.co/U531gTiyjv ✨ Enhancing depth prediction with synthetic images, larger teacher models, and pseudo-labeled real images.…
- Rohan Paul
Now our browsers can become the next main LLM OS to empower web agents with WebLLM🔥 WebLLM is a high-performance in-browser LLM inference engine. It provides a specialized web backend for MLCEngine, offering efficient LLM inference in the browser with local GPU… https://t.co/l1cxSTxlhw https://t.co/VXfsZnxCPz
- Andrei Bursuc
DepthAnythingV2 is up w/ code, ckpts and excellent performance. tl;dr pipeline: finetune DinoV2-G for depth estimation on synthetic data (595k images) -> use it as teacher to generate pseudo-labels on 62M real images-> train student model on pseudo-labels https://t.co/KQRyIdWRTe https://t.co/UuLAjt5uK8
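The teacher–student pseudo-labeling pipeline summarized in the tweet above can be sketched in a few lines. This is a toy illustration with a 1-D linear "model" standing in for the real networks: the actual pipeline fine-tunes a DINOv2-G teacher on 595K labeled synthetic images, uses it to pseudo-label 62M unlabeled real images, and trains a smaller student on those pseudo-labels. All names and data here are hypothetical.

```python
# Toy sketch of the Depth Anything V2 training recipe:
#   1) train a teacher on labeled synthetic data,
#   2) have the teacher pseudo-label unlabeled real data,
#   3) train a student only on the pseudo-labels.
# fit_linear is a stand-in for "training a depth model".

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (stand-in for model training)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# Step 1: "fine-tune" the teacher on labeled synthetic data
# (here the true relation is depth = 2*x + 1).
synthetic_x = [0.0, 1.0, 2.0, 3.0]
synthetic_depth = [1.0, 3.0, 5.0, 7.0]  # precise synthetic labels
teacher = fit_linear(synthetic_x, synthetic_depth)

# Step 2: teacher generates pseudo-labels on unlabeled real data.
real_x = [0.5, 1.5, 2.5, 4.0, 5.0]
pseudo_labels = [teacher(x) for x in real_x]

# Step 3: train the (smaller) student on pseudo-labels only --
# it never sees a ground-truth label, yet inherits the teacher's mapping.
student = fit_linear(real_x, pseudo_labels)

print(round(student(10.0), 3))  # student reproduces depth = 2*x + 1
```

The key point the sketch preserves is that the student's supervision comes entirely from the teacher's predictions, which is what lets the real pipeline scale to 62M unlabeled images.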
- TheAIObserverX
WebLLM: A High-Performance In-Browser LLM Inference Engine ◼ The new WebLLM engine enables high-performance, in-browser language models with local GPU acceleration. This allows private, zero-setup LLM inference directly in web browsers, paving the way for chat apps, text… https://t.co/9UH6fUcHpW
- Aran Komatsuzaki
TikTok presents Depth Anything V2 Trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation model proj: https://t.co/KaOQauiOST abs: https://t.co/9HxIpsPWJJ https://t.co/aj9S1SKjzN
- AK
Depth Anything V2 This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more https://t.co/s7TNDJIDvQ
- Tianqi Chen
Browsers have the potential to become the next main LLM OS to empower web agents. We are excited to announce WebLLM engine, a fast, private (full client-side computation) and convenient (zero environment setup) in-browser LLM inference engine to enable that. WebLLM offers… https://t.co/1LLenxvSba
- NeurAI Project / XNA
🤖 Neurai LLM Chat 🤖 We present a chat that uses the Llama3 and Phi3 models for offline use from the browser itself. ✨ Conditions for use: ☑️ Chromium-based browser. ☑️ Dedicated GPU access. ☑️ 10 GB of free space for the model. ✨ AI chat in the future This chat will be the…
- Ruihang Lai
👉 Fast in-browser LLM inference accelerated by WebGPU, every computation runs locally 👉 Full OpenAI-compatible API 👉 Efficient JSON structured generation 👉 Built-in web worker support Come and check it out 👇 https://t.co/8cKdKY4RQJ
- Charlie Ruan
Excited to share WebLLM engine: a high-performance in-browser LLM inference engine! WebLLM offers local GPU acceleration via @WebGPU, fully OpenAI-compatible API, and built-in web workers support to separate backend executions. Check out the blog post: https://t.co/ScvWpoKQa5 https://t.co/2sBSDWFrRm