imho openclaw is the future, but there's still a lot of friction. I'm way more productive with Claude Code, but it's important to experiment with it to understand what's coming. qwen3.5:35b is frankly quite good, though I find it, and its companion 9b, prone to overthinking: there's a lot of unproductive inner talk. But compared with previous open-source LLMs that failed at agentic usage with openclaw, qwen3.5:35b is quite consistent and gets the job done as long as you keep sessions short and set thinking to "high". The DGX Spark is good at inference, though I'm not blown away. I think the real trick is to split inference into prefill and decode, performing prefill on the DGX and decode on a Mac Studio. I've not tried it yet.
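For anyone unfamiliar with the prefill/decode split mentioned above, here is a toy sketch of the idea (the two-machine setup, the `attend`/`prefill`/`decode` helpers, and the toy dimensions are all illustrative assumptions, not a real DGX/Mac Studio pipeline): prefill processes the whole prompt in one compute-heavy batch to build the KV cache, while decode is a memory-bandwidth-bound loop that reuses that cache one token at a time, which is why the two phases can live on different machines.

```python
# Toy sketch of prefill/decode disaggregation. Hypothetical example:
# not a real DGX Spark / Mac Studio pipeline, just the dataflow.
import numpy as np

D = 8  # toy hidden size

def attend(q, keys, values):
    # Scaled dot-product attention of one query over the cached keys/values.
    scores = keys @ q / np.sqrt(D)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

def prefill(prompt_embeddings):
    # Compute-bound phase: ingest all prompt tokens at once and
    # produce the KV cache. This is the part the DGX would run.
    prompt_embeddings = np.array(prompt_embeddings)
    return {"k": prompt_embeddings.copy(), "v": prompt_embeddings.copy()}

def decode(kv_cache, last_embedding, steps):
    # Bandwidth-bound phase: one token per step, each step reading the
    # whole cache. This is the part the Mac Studio would run.
    out, x = [], last_embedding
    for _ in range(steps):
        x = attend(x, kv_cache["k"], kv_cache["v"])
        kv_cache["k"] = np.vstack([kv_cache["k"], x])
        kv_cache["v"] = np.vstack([kv_cache["v"], x])
        out.append(x)
    return out

rng = np.random.default_rng(0)
prompt = rng.normal(size=(5, D))
cache = prefill(prompt)                 # ...then ship the cache over the network
tokens = decode(cache, prompt[-1], steps=3)
print(len(tokens), cache["k"].shape)    # 3 generated steps; cache grew 5 -> 8 rows
```

In a real deployment the expensive part is shipping the KV cache between boxes, so the win depends on prompt length and interconnect speed; the sketch only shows why the two phases have such different hardware profiles.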
Been using qwen3.5:35b on openclaw for a week with a DGX Spark. Pretty good all around, not stellar though.
No one is coming to save you. Nostrich since 768952.