Take back the media from the devil.
Scary
https://x.com/mwal7777/status/1950903957045277117
Yup!
Building in-house AI infrastructure for real-world tools like CAD, CNC, and robotic automation is exactly where local AI shines. Even better that you're approaching it from the manufacturing side; way fewer people are doing that than chasing chatbots.
Since you're planning to dive in for real, a few things to keep on your radar for when you're ready:
Local AI Server Setup (when the time comes)
LLM Hosts: Look into Ollama, LM Studio, and Text Generation WebUI. These are good beginner-to-intermediate tools for running LLaMA/Mistral/etc. locally.
GPU Utilization: Your RTX 3060 is a capable 12GB card. Great for models up to 7B, and you can offload or quantize to go even further.
Open Source AI Toolkits: Check out:
ComfyUI (for AI image pipelines)
AutoCAD LISP + Python bridges
LangChain + LLM agents for building tool-using AI systems
Router/SSL tips: For that whole SSL/router mess next time, you might want to:
Use Tailscale (for private networking)
Use Cloudflare Tunnel (to expose localhost without dealing with port forwarding)
Self-host reverse proxies like Caddy (auto HTTPS) or NGINX Proxy Manager
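As a sanity check on the GPU sizing note above (a 12 GB RTX 3060 handling models up to ~7B), here's a rough back-of-the-envelope VRAM estimator. This is a sketch: the bytes-per-parameter figures and the flat overhead guess are my assumptions, and real usage also depends on KV cache size, context length, and runtime.

```python
# Rough VRAM estimate for a quantized LLM.
# Bytes-per-parameter values are approximations for common quantization levels.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def fits_in_vram(params_billions, quant, vram_gb, overhead_gb=2.0):
    """Return (estimated_gb, fits) for model weights plus a flat
    overhead guess covering KV cache, activations, and runtime."""
    weights_gb = params_billions * 1e9 * BYTES_PER_PARAM[quant] / 1e9
    est_gb = weights_gb + overhead_gb
    return est_gb, est_gb <= vram_gb

# A 7B model at 4-bit quantization on a 12 GB card:
est, ok = fits_in_vram(7, "q4", 12)
# est ≈ 5.5 GB, ok is True — comfortable fit, as the comment above suggests.
```

By the same estimate, the same card would be tight for a 13B model at q8 but workable at q4, which matches the "offload or quantize to go further" advice.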
CAS latency (CL) refers to how many cycles it takes for RAM to respond to a read request.
In DDR5, 6000 MT/s CL30 has noticeably lower latency than 6000 MT/s CL36 or CL40, because it responds in fewer clock cycles at the same transfer rate.
But there's nuance:
True latency (ns) = (CL ÷ transfer rate in MT/s) × 2000
So:
6000 MT/s CL30 → (30 ÷ 6000) × 2000 = 10 ns
6000 MT/s CL36 → 12 ns
5600 MT/s CL40 → ~14.3 ns
CL30 at 6000 MT/s is ideal: you get tight timings and high bandwidth.
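The latency arithmetic above can be sketched as a tiny helper (the function name is just for illustration; the ×2000 factor comes from DDR transferring twice per clock, so clock MHz = MT/s ÷ 2):

```python
def true_latency_ns(cl, mt_per_s):
    """True CAS latency in nanoseconds.

    DDR transfers twice per clock cycle, so the I/O clock in MHz is
    MT/s / 2, giving latency = CL / (MT/s / 2 MHz) * 1000 ns,
    i.e. (CL / MT/s) * 2000.
    """
    return cl / mt_per_s * 2000

print(true_latency_ns(30, 6000))           # 10.0 ns
print(true_latency_ns(36, 6000))           # 12.0 ns
print(round(true_latency_ns(40, 5600), 1)) # 14.3 ns
```

This makes it easy to compare kits directly: a lower product of CL ÷ MT/s wins on latency, while MT/s alone decides bandwidth.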