spacestr

0x80085
Member since: 2024-04-03
0x80085
0x80085 26d

πŸ”₯🚨 "GIVE US YOUR SOUL" BROWSER V8.8 NOW LIVE! 🚨πŸ”₯ πŸš€ MAX DATA HARVESTING – Every keystroke, every tab, every weird midnight search BELONGS TO US. πŸ’° "OPT-OUT" IS A PRANK – You CAN'T. We OWN your clicks. PAY US IN PRIVACY. πŸ” 50,000 TRACKERS PER SECOND – Your cat pics fuel our AI overlord’s DARKEST DREAMS. FEATURING: βœ… Mandatory biometric scans (lol we see you) βœ… "Incognito" mode that just emails us your history βœ… Legal immunity (we wrote the laws, nerd) πŸ’€ UPGRADE NOW OR WE’LL SELL YOUR SEARCH HISTORY TO YOUR MOM.

0x80085
0x80085 27d

You're welcome and good luck, happy coding! 🫑

0x80085
0x80085 27d

Cursor is free for the first 10 days or so, which I think may be easiest for new coders. The trial just ends afterwards/no CC required yet. Could probably even cycle new accounts if a trial runs out. Also it's a full-blown IDE, so it codes and runs the code for you. Quickest option imo. Free Deepseek is rather good for coding and allows more free prompts than GPT. Free GPT is good, but fewer prompts/defaults to a crappier model once free mode runs out. Both do reset the limit after 24h or so.

0x80085
0x80085 27d

Come to think of it, I bet there's already a GitHub repo which does this exact thing to archive YT videos. Maybe look for that before trying to write your own.

0x80085
0x80085 27d

Oh! And you can use yt-dlp for the download of YT videos https://github.com/iqbal-rashed/ytdlp-nodejs

0x80085
0x80085 27d

You could ask ChatGPT or Deepseek, or use Cursor if you're not a dev. This task is simple enough for AI to be able to solve it. High level it's something along the lines of:

Make a Node.js (or your preferred programming language) script which takes the RSS link of your YouTube channel and queries it for the most recent uploads. YT RSS doesn't require authentication, so you don't need to store any tokens for that. The script should keep track of already downloaded videos to avoid duplicates. Then it will have to upload to a mirror somewhere tho, that will require authentication. You could host it locally on a PC that's always online. Even at your own home. And ofc make sure it runs constantly (use pm2 for node scripts for example).

I asked an AI for a very simple script which probably is lacking features but can be used as an example/starting point (lacks mirror upload tho):

const Parser = require('rss-parser');
const ytdl = require('ytdl-core');
const fs = require('fs');

const parser = new Parser();
const feedUrl = 'YOUTUBE_CHANNEL_RSS_FEED_URL';
const STATE_FILE = 'downloaded.json';

// Load/save the set of already-downloaded video IDs (simple JSON file)
function loadDownloadedVideos() {
  try { return JSON.parse(fs.readFileSync(STATE_FILE, 'utf8')); } catch { return []; }
}
function saveDownloadedVideos(set) {
  fs.writeFileSync(STATE_FILE, JSON.stringify([...set]));
}
// Feed items link to standard watch URLs, so grab the "v" query param
function extractVideoId(link) {
  return new URL(link).searchParams.get('v');
}

const downloadedVideos = new Set(loadDownloadedVideos());
fs.mkdirSync('videos', { recursive: true }); // make sure the output dir exists

async function checkAndDownloadVideos() {
  while (true) {
    try {
      const feed = await parser.parseURL(feedUrl);
      for (const item of feed.items) {
        const videoId = extractVideoId(item.link);
        if (!downloadedVideos.has(videoId)) {
          await downloadVideo(item.link, videoId);
          downloadedVideos.add(videoId);
          saveDownloadedVideos(downloadedVideos); // Save to file/db
        }
      }
      await new Promise(resolve => setTimeout(resolve, 3600000)); // Wait 1 hour
    } catch (error) {
      console.error('Error:', error);
      await new Promise(resolve => setTimeout(resolve, 60000)); // Wait 1 min on error
    }
  }
}

async function downloadVideo(url, videoId) {
  const stream = ytdl(url, { quality: 'highestvideo' });
  stream.pipe(fs.createWriteStream(`videos/${videoId}.mp4`));
  await new Promise((resolve, reject) => {
    stream.on('end', resolve);
    stream.on('error', reject);
  });
}

checkAndDownloadVideos();
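For reference, YouTube's channel RSS endpoint is just the channel ID in a query string, and feed items link to normal watch URLs. The helpers below sketch those two bits as standalone functions (the names `buildFeedUrl`/`extractVideoId` are my own, not from any library):

```javascript
// YouTube exposes a per-channel RSS feed at this well-known endpoint.
function buildFeedUrl(channelId) {
  return `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`;
}

// Feed items link to standard watch URLs like
// https://www.youtube.com/watch?v=VIDEO_ID, so the ID is the "v" query param.
function extractVideoId(link) {
  return new URL(link).searchParams.get('v');
}
```

Plug the output of `buildFeedUrl('UC...')` in as the `feedUrl` of the script above.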

0x80085
0x80085 27d

It's nothing special, the real legends are the Nitter devs and instance hosters (neither are me lol)

0x80085
0x80085 27d

The Elon Link Reaper bot? I created it. It just reformats Twitter links into Nitter instance ones, it doesn't cross-post at all FYI. Glad you like it! The way Nitter works is it scrapes Twitter via a non-public API (and it's against X TOS). They use a large pool of Twitter account tokens to be able to pull this off, bc they can't rely on a single user account. That would be too obvious scrapebot behavior. So they disperse the scraping among multiple, probably hundreds of, accounts to make a single Nitter instance work. Check the Nitter GitHub repo for more info. https://github.com/zedeus/nitter As for the YT mirroring, you'd probably have to set up an RSS-fetching loop script which downloads videos from your channel once you upload them, then reuploads those videos to your desired mirror site/nostr relay/bloom/peertube/your own website/etc etc (Idk much about bloom stuff/media hosting here yet tho..) Bot npub:

0x80085
0x80085 27d

Ah I see πŸ€” I'll have to read up more about relays!

0x80085
0x80085 27d

Hmm interesting, maybe it won't be an issue in that case Thanks for explaining! πŸ™

0x80085
0x80085 29d

This makes relays stateful, even too stateful. All relays would need to save the (ref to) bloom filter of every recently subscribed user. How does this scale? Clients should own this, not relays imo
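To illustrate the state in question: a bloom filter is just a bit array plus a few hash functions, cheap enough for a client to hold and query locally. A toy sketch (sizes and the seeded-FNV hashing are arbitrary choices for illustration, not from any NIP):

```javascript
// Minimal bloom filter: a bit array where each added item sets k bits.
// Lookups can false-positive but never false-negative.
class BloomFilter {
  constructor(bits = 1024, hashes = 3) {
    this.size = bits;
    this.hashes = hashes;
    this.bitArray = new Uint8Array(bits);
  }
  // Seeded FNV-1a hash mapped into the bit array.
  _hash(str, seed) {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < str.length; i++) {
      h ^= str.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.size;
  }
  add(item) {
    for (let s = 0; s < this.hashes; s++) this.bitArray[this._hash(item, s)] = 1;
  }
  has(item) {
    for (let s = 0; s < this.hashes; s++) {
      if (!this.bitArray[this._hash(item, s)]) return false;
    }
    return true;
  }
}
```

The whole thing is ~1 KB here, so a client keeping one per subscription is trivial; a relay keeping one per recently subscribed user is where the statefulness concern comes in.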

0x80085
0x80085 29d

Your mom Also 12h later - doesn't elaborate Who, why and why not zapstore. Also screw apple

0x80085
0x80085 29d

*coughs* make it opt-in *cough* imo user-tracking functionality should always be optional and user-controlled.

Welcome to 0x80085 spacestr profile!

About Me

Washed up dev

Interests

  • No interests listed.

Videos

Music

My store is coming soon!

Friends