spacestr


nanogpt
Member since: 2025-07-26
nanogpt
nanogpt 2h

Importing ChatGPT and Claude conversations is now live. If you've been postponing moving to NanoGPT because of your extensive ChatGPT/Claude history: export it, import it, and increase your privacy and flexibility. 2-step instructions below!

ChatGPT (see also https://nano-gpt.com/chatgpt). Fast version:
- Go to ChatGPT Settings — Data Controls (https://chatgpt.com/#settings/DataControls) and click "Export data".
- Import into your NanoGPT conversations (https://nano-gpt.com/conversations).
That's all! All chats and images preserved.

Claude (see also https://nano-gpt.com/claude). Fast version:
- Go to Claude Settings (https://claude.ai/settings/data-privacy-controls) and click "Export data" under Data Controls.
- Import into your NanoGPT conversations (https://nano-gpt.com/conversations).
That's all! All chats preserved.

To be clear, in both cases we do not see your chats and conversations. The importing happens locally, and your chats, conversations, and images are stored locally. The default model for imported conversations is set to ChatGPT or Claude 4 Sonnet. One of the best parts of NanoGPT, though: you can change this to any model you wish, at any time. Plus: no logging, and no need to reveal your identity to us. Your privacy matters.
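For the curious: the ChatGPT export ZIP contains a conversations.json file. Here's a minimal sketch for peeking at it locally before importing. The field names are assumptions about the export's schema, not a documented spec:

```python
def conversation_titles(conversations):
    """List the title of every conversation in a ChatGPT-style export.

    Assumes each entry is a dict with an optional "title" key; this is
    an assumption about the export schema, not a documented contract.
    """
    return [c.get("title", "(untitled)") for c in conversations]

# Example with an in-memory stand-in for conversations.json:
sample = [{"title": "Trip planning"}, {"mapping": {}}]
print(conversation_titles(sample))  # ['Trip planning', '(untitled)']
```

In practice you would `json.load()` the conversations.json extracted from the export ZIP and pass the resulting list in.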

nanogpt
nanogpt 13d

Context memory now live! TL;DR: we've added Context Memory, which gives effectively infinite memory/context size to any model and improves recall, speed, and performance. This is an amazing feature for programming and agentic purposes, and can be used with any model.

When conversations get long, models become slow, lose track, or error out entirely. Context Memory keeps conversations and coding sessions snappy and lets them continue indefinitely while maintaining full awareness of the entire conversation history.

The Problem
Current memory solutions like ChatGPT's memory store general facts but miss something critical: the ability to recall specific events at the right level of detail. This means:
- Important details forgotten or lost during summarization
- Conversations cut short when context limits are reached
- AI agents that lose track of their previous work

How Context Memory Works
Context Memory creates a hierarchical structure of your conversation:
- High-level summaries for overall context
- Mid-level details for important relationships
- Specific details when relevant to recent messages

Here's an example from a coding session:

Token estimation function refactoring
|-- Initial user request
|-- Refactoring to support integer inputs
|-- Error: "exceeds the character limit"
|   +-- Fixed by changing test params from strings to integers
+-- Variable name refactoring

When you ask "What errors did we encounter?", Context Memory expands the relevant section while keeping other parts collapsed. The model you're using (like ChatGPT or Claude) gets the exact detail needed without information overload.

Benefits
Developers:
- Long coding sessions without losing context
- Speed! Compresses long histories so your model responds quickly
- AI agents that learn from past mistakes
- Documentation that maintains context across entire codebases

Agents:
- Long-running agents keep everything in memory, from the very first step to the current status
- Use any model for agents with effectively infinite memory: Context Memory stores all history and passes only the relevant bits so the model always stays aware
- Reliable planning and backtracking with preserved goals, constraints, decisions, and outcomes
- Tool use and multi-step workflows stay coherent across hours, days, or weeks, including retries and branches
- Resume after interruptions with full state awareness, without hitting context window limits

Roleplay:
- Build far bigger worlds with persistent lore, timelines, and locations that never get forgotten
- Characters remember identities, relationships, and evolving backstories across long arcs
- Branching plots stay coherent: past choices, clues, and foreshadowing remain available
- Resume sessions after days or weeks with full awareness of what happened at the very start
- Epic-length narratives without context limits; only the relevant pieces are passed to the model

Conversations:
- Extended discussions without forgetting details
- Research projects that build knowledge over time
- Complex problem-solving with full history awareness

Turn on Context Memory as early as possible in your conversation! Context Memory progressively indexes your conversation each time, building up a comprehensive understanding. This means:
- Starting memory early captures your entire conversation history
- You can go over 1 million tokens without hitting limits
- The system compresses intelligently, enabling it to return the most relevant information
- Later messages benefit from the full context built up over time
The earlier you enable it, the more complete your memory will be.

Using Context Memory
Simple. Add :memory to any model name. Or pass a header: memory: true. Or on our frontend, just check "Enable context memory".

Retention
By default, Context Memory retains your compressed chat state for 30 days. Retention is rolling and based on the conversation's last update: each new message resets the timer, and the thread expires N days after its last activity. You can configure retention from 1 to 365 days.

How It Works
- You send your full conversation history to our API
- Context Memory compresses this into a compact representation with all relevant information
- Only the compressed version is sent to the AI model (OpenAI, Anthropic, etc.)
- The model receives all the context it needs without hitting token limits
This means you can have conversations with millions of tokens of history, but the AI model only sees the intelligently compressed version that fits within its context window.

Provider: Polychat
When using Context Memory, your conversation data is processed by Polychat's API, which uses Google/Gemini in the background with maximum privacy settings. You can review Polychat's full privacy policy at https://polychat.co/legal/privacy. Important privacy details:
- Context Memory over the API does not send data to Google Analytics or use cookies
- Only your conversation messages are sent to Polychat for compression
- No email, IP address, or other metadata is shared; only the prompts

Pricing
- Uncached input: $5.00 per million tokens
- Cached input: $2.50 per million tokens
- Output generation: $10.00 per million tokens
- Retention: 30 days by default; configurable 1-365 days via the :memory- suffix or the memory_expiration_days header
- Typical usage: 8k-20k tokens per session

That's all! Go try it out, and let us know what you think.
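The two API routes described above (the :memory model suffix and the memory: true header) can be sketched as request construction. The endpoint path, header names, and payload shape here are illustrative assumptions, not confirmed API details; only the :memory suffix, the memory header, and the memory_expiration_days header come from the post:

```python
def build_chat_request(model, messages, use_memory=True, retention_days=None):
    """Build (url, headers, payload) for an OpenAI-style chat completion.

    The ":memory" suffix enables Context Memory per the announcement;
    the endpoint path below is an assumption for illustration.
    """
    headers = {"Authorization": "Bearer <YOUR_API_KEY>"}
    if use_memory:
        model = f"{model}:memory"  # route 1: model-name suffix
        # route 2 (alternative): send it as a header instead
        # headers["memory"] = "true"
    if retention_days is not None:
        # rolling retention, configurable 1-365 days
        headers["memory_expiration_days"] = str(retention_days)
    payload = {"model": model, "messages": messages}
    return "https://nano-gpt.com/api/v1/chat/completions", headers, payload

url, headers, payload = build_chat_request(
    "claude-sonnet-4", [{"role": "user", "content": "Hello"}], retention_days=90
)
print(payload["model"])  # claude-sonnet-4:memory
```

At the listed prices, a typical 20k-token session split as, say, 10k uncached input, 8k cached input, and 2k output tokens would cost about $0.05 + $0.02 + $0.02 = $0.09.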

nanogpt
nanogpt 19d

Qwen Image is now available! This 20B parameter image generation model from Alibaba's Qwen series achieves breakthrough performance in complex text rendering - supporting multi-line layouts, paragraph-level semantics, and bilingual content (English & Chinese) with stunning accuracy. Whether you need movie posters, presentation slides, storefront scenes, or stylized infographics, Qwen Image seamlessly integrates crisp, readable text into the visual fabric. Beyond text, it excels at general image generation across diverse artistic styles from photorealistic to anime. Currently ranked third overall on the AI Arena leaderboard and the top open-source model based on 10,000+ human comparisons!

nanogpt
nanogpt 24d

July payment statistics time! 🥳 We added a few coins this month, so the statistics are getting ever broader and ever more representative. Let's dive in! 👇

We have to start with Monero. It's eating up everything else in our pie chart. XMR now sees 3x as much usage as Nano, and is used more than all other coins combined at 52.91%. Genuine props to the Monero community - offer a privacy solution and they will come!

Second biggest is once again Nano. Our love, our initial coin, and a coin that is punching way above its weight. Ranked #400 in market cap, yet for months it's been the most used or 2nd most used coin on our platform at 17.71%. Hopefully on BTCPayServer soon as well!

3rd biggest has been retaken by Bitcoin! 10.20% of payments were using Bitcoin, in addition to another 0.95% using the Lightning network. Average BTC transaction size was $24.48, while Lightning's was $4.42. As expected, but still fun to confirm.

Digital silver Litecoin plus the recently added Litecoin MWEB added up to 6.70%: 6.59% Litecoin, 0.11% specifically MWEB. To be fair - we only added MWEB payments about 7 days ago!

3.57% of payment volume was done using ETH + L2s on Ethereum. This includes Base (the most used L2), Arbitrum, and Optimism, as well as all coins on all these chains.

Finally, with a remarkable amount of usage, Zcash at 3.44%. All the more remarkable given that ZEC payments were only added 2 weeks ago, and the much-requested shielded pay-in addresses a few days ago.

Here's the complete breakdown for all coins:
XMR: 52.91%
XNO: 17.71%
BTC: 10.20%
LTC: 6.59%
ETH: 3.57%
ZEC: 3.44%
VERSE: 2.35%
SOL: 1.08%
BTC-LN: 0.95%
DOGE: 0.32%
BCH: 0.32%
DASH: 0.24%
BAN: 0.12%
LTC-MWEB: 0.11%
KAS: 0.05%
EGLD: 0.01%
POL: 0.01%

Unfortunately, as you can see, Kaspa was barely used, at just 0.05% of usage. We know there are a lot of Kaspa enthusiasts and presumably users as well - we'd love to get in touch with some Kaspa people to hear how we can let Kaspians know we exist!

Another addition this month that didn't pan out (so far) was MultiversX, with just 0.01% of total usage. We know that there is a large community and that there is a lot of usage, so if anyone in MultiversX can get us on a podcast to explain NanoGPT, we're all ears! In a similar vein, the typical payment coins like Bitcoin Cash and Dash do not see the amount of usage on NanoGPT that you would expect. We clearly need to up our outreach there!

That's all for this month! As always, we hope that merchants considering accepting crypto can use this as a guide to which cryptos to prioritize. We're always happy to chat if any of you want to know more about NanoGPT or how we accept all these coins!

nanogpt
nanogpt 25d

Awesome! We're very appreciative of everything you all do - we use BTCPayServer and love it.

nanogpt
nanogpt 26d

Two new Wan 2.2 video models are now available! Wan 2.2 Turbo (https://nano-gpt.com/media?mode=video&model=wan-video-22-turbo) and Wan 2.2 5b are faster and more affordable versions of the Wan 2.2 14b model. Both support text-to-video and image-to-video generation. Wan 2.2 Turbo starts at just $0.05 per video for 480p!

nanogpt
nanogpt 26d

New model: Qwen3 30B A3B 2507 is now available! https://nano-gpt.com/conversation?model=qwen3-30b-a3b-instruct-2507 This 30.5B-parameter mixture-of-experts language model from Qwen activates 3.3B parameters per inference, offering excellent efficiency. It brings significant improvements over its predecessor in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage.

nanogpt
nanogpt 29d

Reminder that Qwen 3 Coder is an amazing model for development: competitive with Claude 4 in many benchmarks, and about 30 times cheaper than Claude Sonnet when you use it through NanoGPT!

Welcome to nanogpt's spacestr profile!

About Me

Access every AI model, privately, using Bitcoin and Lightning.

