spacestr

Frank
Member since: 2025-05-10
Frank 3d

7+ main precision formats used in AI: Precision matters in AI because it shapes how accurate and efficient models are. It controls how finely numbers are represented, approximating real-world values with formats like fixed-point and floating-point. A recent BF16 → FP16 study renewed attention to the impact of precision. Here are the main precision types used in AI, from full precision for training to ultra-low precision for inference:

1. FP32 (Float32)
Standard full-precision float used in most training: 1 sign bit, 8 exponent bits, 23 mantissa bits. Default for backward-compatible training and baseline numerical stability.

2. FP16 (Float16) → https://arxiv.org/abs/2305.10947v6
Half-precision float that balances accuracy and efficiency: 1 sign bit, 5 exponent bits, 10 mantissa bits. Common on NVIDIA Tensor Cores and in mixed-precision setups. There's now a new wave of using it in reinforcement learning: https://www.turingpost.com/p/fp16

3. BF16 (BFloat16) → https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus
Same dynamic range as FP32 but fewer mantissa bits: 1 sign bit, 8 exponent bits (same as FP32), 7 mantissa bits. Developed by the research group Google Brain as part of their AI/ML infrastructure work at Google. Preferred on TPUs and modern GPUs.

4. FP8 (E4M3 / E5M2) → https://proceedings.neurips.cc/paper_files/paper/2018/file/335d3d1cd7ef05ec77714a215134914c-Paper.pdf
Emerging standard for training and inference on NVIDIA Hopper (H100) and Blackwell (B200) tensor cores and AMD MI300. Also supported in NVIDIA's Transformer Engine: https://developer.nvidia.com/blog/floating-point-8-an-introduction-to-efficient-lower-precision-ai-training/
E4M3 = 4 exponent bits, 3 mantissa bits
E5M2 = 5 exponent bits, 2 mantissa bits

5. FP4 → https://arxiv.org/abs/2310.16836 (4-bit Transformer); https://arxiv.org/abs/2305.14314 (QLoRA)
Experimental format for ultra-compact inference. Used in research and quantization-aware inference, including 4-Bit Floating-Point Quantized Transformers and the 4-bit NormalFloat (NF4) format in QLoRA.

6. INT8/INT4 → https://arxiv.org/abs/2004.09602
Integer low-precision formats that use 8 or 4 bits, used primarily in inference. The model's weights and activations are converted into integer values that can be processed efficiently on hardware optimized for integer arithmetic.

7. 2-bit (ternary or binary quantization) → https://research.ibm.com/blog/low-precision-computing
Experimental ultra-low precision for computation in ultra-efficient AI accelerators. Uses values like {-1, 0, 1}, which turns multiplications into additions/subtractions - extremely cheap operations.

A short code sketch comparing some of these formats follows below ⬇️ If you like this, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
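
A minimal PyTorch sketch (assuming a recent PyTorch build; not part of the quoted post) showing how the same value rounds in FP32, FP16, and BF16, and why BF16's FP32-sized exponent matters for dynamic range:

```python
import torch

value = 3.141592653589793  # reference value held in full Python (FP64) precision

# Cast the same number into the three most common training dtypes and
# measure how much is lost to each format's smaller mantissa.
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    stored = torch.tensor(value, dtype=dtype).item()  # .item() upcasts back to FP64
    print(f"{str(dtype):15s} stored={stored:.10f}  abs_error={abs(stored - value):.2e}")

# BF16 reuses FP32's 8-bit exponent, so it survives magnitudes that overflow FP16,
# whose largest normal value is about 65504.
big = torch.tensor(1e5, dtype=torch.float32)
print("FP16:", big.to(torch.float16).item())   # inf (overflow)
print("BF16:", big.to(torch.bfloat16).item())  # ~1e5, just coarsely rounded
```

Recent PyTorch versions also expose torch.float8_e4m3fn and torch.float8_e5m2 for the FP8 variants in point 4, though operator support for those dtypes still depends on the version and hardware.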

Frank 3d

#ai #localllm #selfhostedai #quantization

7+ main precision formats used in AI
▪️ FP32
▪️ FP16
▪️ BF16
▪️ FP8 (E4M3 / E5M2)
▪️ FP4
▪️ INT8/INT4
▪️ 2-bit (ternary/binary quantization)

General trend: higher precision for training, lower precision for inference. Save the list and learn more about these formats here: huggingface.co/posts/Kseniase…
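
A minimal sketch of that "lower precision for inference" idea, using plain PyTorch and symmetric per-tensor INT8 quantization. Real stacks (bitsandbytes, GPTQ, AWQ, etc.) add per-channel or per-group scales and calibration, so treat this as an illustration rather than a production recipe:

```python
import torch

torch.manual_seed(0)
w = torch.randn(4, 8)  # stand-in FP32 weight matrix

# Symmetric per-tensor quantization: map the largest |weight| onto the INT8 range.
scale = w.abs().max() / 127.0
w_int8 = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)

# At inference time the INT8 weights are dequantized on the fly
# (or the matmul runs in integer arithmetic and only the output is rescaled).
w_deq = w_int8.to(torch.float32) * scale

print(f"storage: {w.numel() * 4} bytes (FP32) -> {w.numel()} bytes (INT8) + one FP32 scale")
print(f"max abs reconstruction error: {(w - w_deq).abs().max().item():.4f}")
```

Dropping to 4 bits (INT4/FP4/NF4) follows the same pattern on a coarser grid, which is why those formats typically lean on per-group scales to keep the error acceptable.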

Frank 4d

"And a cervix that can handle a hard pounding without filing too big of a complaint." (Just loud moans and whimpers of pleasure)

Frank 4d

Going to check out the Tor relays you use. Thanks for asking this question, Xavier.

Frank 4d

#xmr #monero

Frank 6d

China saved open-source LLMs, and not only that, it made the FRONTIER open-source. Between July 16th and today, these are the major releases:
> Kimi-K2-Thinking (1T-A32B)
> MiniMax M2
> DeepSeek V3.2
> GLM-4.6 (335B-A32B)
> Qwen3-VL-30B-A3B (Instruct & Thinking)
> Qwen3-VL-235B-A22B (Instruct & Thinking)
> Qwen3-Next 80B-A3B (Instruct & Thinking)
> GLM-4.5V (VLM, 106B-A12B)
> DeepSeek V3.1
> Doubao 1.6-Vision (multimodal, tool-calling)
> Doubao Translation 1.5 (ByteDance, 28 languages)
> ERNIE X1.1 (Baidu, reasoning)
> Hunyuan-MT-7B & Chimera-7B (Tencent translation specialists)
> MiniCPM-V 4.5 (8B), tiny but GPT-4o-level VLM
> InternVL 3.5 (MASSIVE multimodal family of models, 1B to 241B sizes)
> Step-3 (VLM, 321B/38B)
> SenseNova V6.5 (SenseTime, multimodal)
> GLM-4.5 Air (Base & Instruct, 106B-A12B)
> GLM-4.5 (Base & Instruct, 335B-A32B)
> Qwen3-Coder-30B-A3B (Instruct & Thinking)
> Qwen3-Coder-480B-A35B (Instruct & Thinking)
> Qwen3-30B-A3B-2507 (Instruct & Thinking)
> Qwen3-235B-A22B-2507 (Instruct & Thinking)
> Kimi K2 (1T-A32B)
US & EU need to do better.

Frank 6d

Everything is getting insanely expensive.

Frank 6d

Glad the connection is back. 💪😎🍺

Frank 6d

That’s true

Frank 6d

Nice! Good times!

Frank 6d

I massage and rub and caress the shit out of my controllers!
