Large Study Finds That Replacing Workers With AI Is Backfiring Badly “A new study from Gartner immediately caught our eye. As Fortune reports, the research and advisory firm surveyed 350 global business executives whose companies are pulling in at least $1 billion annually to investigate whether all these AI layoffs are paying off in the real world.” The bottom line right now seems to be that replacing workers with AI gives no real benefit (let's not even get into those businesses that lost data by letting AI manage it), and in fact a business loses a lot of experienced workers in the process. Where AI has actually been beneficial from a business perspective is where workers themselves used it to enhance their own work capability. See https://futurism.com/artificial-intelligence/layoffs-ai-automation-backfire #technology #AI #workplace
With KOReader, my ancient Kobo eReader is now better than a new Kindle The linked article inspired me to install KOReader on my old Kobo eReader. It was just a question of copying two files to the Kobo's root directory and running the install.sh script there. I then also ran an update for KOReader itself once it had restarted. It was as simple as that, and the Kobo software is still on the device. I can even exit KOReader and just go back to the Kobo software as normal if I wish. It is true that KOReader does feel very responsive, and it seems snappier than the Kobo software. What I liked was the mass of customisation options, covering not only appearance and behaviour but also all the plugins. Most surprising was the wealth of reading statistics shown, from a calendar view to activity to time per book, and a lot more. If you have a Kobo, this is really well worth installing. And yes, it does beat the socks off the software that comes on any Kindle! See https://www.howtogeek.com/my-ancient-kobo-ereader-is-now-better-than-a-new-kindle #technology #ereader #reading #Kobo
South African universities developing their own ChatGPTs that better understand local languages “From Cape Town to the Free State, local educational institutions recently announced work on language-focused AI models that better understand local languages compared to international giants like ChatGPT. On Monday, researchers from the University of Cape Town (UCT) announced that they had developed a new AI language model specifically trained on South Africa’s 11 official written languages.” I'm really glad to see this for two reasons: we should not just be blindly adopting a dependence on foreign-owned AI tools, and secondly, we need to develop our own LLMs that better understand the local cultures and nuances of our country. This is a practice that should be more widely adopted throughout local industry, and not just for LLMs. It enhances the innovation we have and encourages local skill building. Of course, it will also make these tools far more meaningful and inclusive for non-English language groups. See https://mybroadband.co.za/news/ai/645787-south-african-universities-developing-their-own-chatgpts.html #technology #AI #SouthAfrica
I cut $800 in subscriptions by running free tools on hardware I already owned “Subscription costs have a way of feeling invisible. You might have a cloud storage here, an AI tool there, a transcription service you barely use anymore. All of this can add up to something substantial. But if you own a mid-range GPU, there’s a good chance you’re paying for things your hardware could handle for free. I have an NVIDIA RTX 3060 (currently around $250), and last year it saved me $819.96 by allowing me to cut or downgrade four subscriptions. It wasn't by doing anything exotic, but by running free, open-source tools locally that do the same job.” I started to realise this myself just this week, as I was experimenting with AI tools and AI image generation. I made a mistake when I upgraded my video card a few months ago, going from a 6 GB VRAM card to a 12 GB VRAM card. That was because I had a game that really wanted about 8 GB of VRAM and I reckoned the 12 GB would give it a bit of headroom; DaVinci Resolve Studio also wanted 8 GB of VRAM for its new AI functions. Yep, I know the prices get expensive as you go higher up, but I was thinking in gaming mode, not about what else I could use that card for. Thinking now with this other mindset, I realise I should have pushed higher on my new card. Still, that said, you can work efficiently with a 12 GB card, or even a bit smaller, if you don't run too many GPU-intensive apps together, and you can get away with smaller, more efficient AI models too. See https://www.howtogeek.com/ways-my-old-nvidia-gpu-saved-me-thousands-of-dollars #technology #opensource #subscriptions #GPU
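On the question of how much VRAM you actually need, a rough back-of-the-envelope check is: parameter count times bytes per weight for the chosen quantisation, plus some headroom for context and whatever else the GPU is doing. The Python sketch below is just that rule of thumb; the 20% overhead figure and the example models are assumptions for illustration, not measurements from the article.

```python
# Rough rule of thumb: model weights need (parameters x bytes-per-weight),
# plus headroom for the KV cache, activations and the desktop itself.
# The 20% overhead figure is an assumption for illustration only.

def fits_in_vram(params_billion: float, bytes_per_weight: float,
                 vram_gb: float, overhead_fraction: float = 0.2) -> bool:
    """Return True if the estimated model footprint fits in the given VRAM."""
    weights_gb = params_billion * bytes_per_weight      # e.g. 7B at ~4-bit is about 3.5 GB
    needed_gb = weights_gb * (1 + overhead_fraction)    # add headroom for cache etc.
    return needed_gb <= vram_gb

# A 7B model quantised to roughly 4 bits (~0.5 bytes/weight) on a 12 GB card: fine.
print(fits_in_vram(7, 0.5, 12))    # True
# A 13B model at 8-bit (~1 byte/weight) on the same card: already too tight.
print(fits_in_vram(13, 1.0, 12))   # False
```

Which matches the point above: 12 GB is comfortable for smaller quantised models, while the really big ones are what push you towards the 24 GB cards.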
My First Meanderings With My Own Locally Hosted AI Tools This is by no means any form of detailed tutorial, but I did pick up a few things about locally hosted AI while testing over the last few days, so I thought I’d share some of them. I did upgrade my GPU recently to 12 GB VRAM, mainly to better accommodate a game I sometimes play and to run the AI functions in DaVinci Resolve Studio, which were struggling on my previous 6 GB VRAM card. But I realised my GPU is mostly idle, and seeing that my home server has no decent GPU in it, I could maybe put my GPU to a better and more useful purpose. I also realised that many AI models can fit in 8 GB or less of VRAM. I had recently upped my Google One subscription to the AI Plus option and wanted to see how locally hosted AI compares with that. Not only that, but I’ve been testing out that Gemini option to see if it will replace my paid Canva subscription. TL;DR for all of this is that the locally hosted AI is not quite up to the standard of Gemini AI Plus. A 24 GB VRAM card would probably do a lot better, as you can fit way bigger AI models into it and even run more than one AI app at a time, but those cards are super pricey. As someone said recently, you either pay for the AI subscription service, or you pay for your own hardware! The full post is too long to put here, so it is best read at my blog, where various photos are also inserted into the text. See https://gadgeteer.co.za/my-first-meanderings-with-my-own-locally-hosted-ai-tools #technology #selfhosting #AI
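For anyone wondering what "locally hosted AI" looks like in practice, here is a minimal sketch of sending a prompt to a local model over HTTP. It assumes an Ollama-style runtime listening on its default port 11434 with a small model already pulled; the model name and prompt are placeholders, and this is not the exact setup from my blog post.

```python
# Minimal sketch: send a prompt to a locally hosted model over HTTP.
# Assumes an Ollama-style server on localhost:11434 with a small model pulled;
# change MODEL to whatever you actually have installed.
import json
import urllib.request

MODEL = "llama3.2"   # placeholder model name

payload = json.dumps({
    "model": MODEL,
    "prompt": "In two sentences, why do smaller quantised models suit a 12 GB GPU?",
    "stream": False,   # ask for a single JSON response rather than a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])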
Google quietly fixed AirPods compatibility with Android, and the LibrePods app is all you need “It’s long been a poorly kept secret that AirPods technically work with Android phones. However, certain features have always been broken due to Apple’s disregard for Bluetooth standards. That’s finally been fixed, and an app called LibrePods unlocks the possibilities.” The actual fault, though, does lie with Apple, as they just do not follow the Bluetooth standards. Did I ever mention I have a real hatred for Big Tech companies that try to lock out any competition? The same goes for Facebook embracing open communications to get users on board, and then shutting down that access. But I digress. LibrePods is now in the Play Store, and the good news is it does not require root. The article also notes that all functions are available for free if you download the app from GitHub directly. See https://www.howtogeek.com/google-quietly-fixed-airpods-compatibility-with-android-and-this-app-is-all-you-need or https://github.com/kavishdevar/librepods #technology #opensource #airpods
Stop using Cloudflare's default 1.1.1.1 DNS (changing one digit blocks malware at the router level) “Cloudflare's 1.1.1.1 DNS server is popular for its speed, reliability, and support for DNS over HTTPS (DoH), which gives you some added privacy. However, 1.1.1.1 doesn't do much besides look up IP addresses for you. If you want something that offers additional security, you should try 1.1.1.2 instead.” Many folks do use Cloudflare for their DNS, and 1.1.1.1 is the usual default (it is often better than using your own ISP's DNS), but if you do, you can notch up your security a bit by using 1.1.1.2 instead. It could help prevent you from visiting a phishing link or being infected by some malware. It is a quick change to make, and typically it is just one small setting on the home router. See https://www.howtogeek.com/everyone-uses-1111-but-1112-protects-you #technology #malware #security #DNS
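If you want to confirm the change has taken effect, one quick way is to resolve the same hostname against 1.1.1.1 and 1.1.1.2 and compare the answers. The Python sketch below assumes the dnspython package is installed; Cloudflare describes blocked names as resolving to 0.0.0.0 on its filtering resolvers, so treat that as the expected signal rather than a guarantee.

```python
# Compare how a hostname resolves on Cloudflare's plain resolver (1.1.1.1)
# versus the malware-blocking one (1.1.1.2).
# Requires: pip install dnspython
import dns.resolver

def resolve_with(nameserver: str, hostname: str) -> list[str]:
    resolver = dns.resolver.Resolver(configure=False)  # ignore the system resolver config
    resolver.nameservers = [nameserver]
    try:
        return [rr.to_text() for rr in resolver.resolve(hostname, "A")]
    except dns.resolver.NXDOMAIN:
        return ["NXDOMAIN"]

hostname = "example.com"  # swap in a known test domain to see the blocking in action
print("1.1.1.1:", resolve_with("1.1.1.1", hostname))
print("1.1.1.2:", resolve_with("1.1.1.2", hostname))
# On the filtering resolver, Cloudflare answers 0.0.0.0 for names it classifies as malicious.
```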
Stop Guessing if Your Servers are Up: Free VPS Monitoring Guide In this video, I explore how to set up Uptime Kuma on a free VPS using Docker Compose to create a robust, 24/7 monitoring solution for your home lab or AI applications. I dive into the practical logic of utilising free-tier cloud resources to ensure your services stay online without adding to your monthly overhead. The discussion covers the end-to-end deployment of a modern monitoring stack. I walk through my docker-compose.yaml configuration, which integrates Uptime Kuma for service availability alerts, Caddy for automatic SSL encryption, and the Beszel Agent for lightweight host metrics. There are also practical tips on managing your monitoring environment via the terminal and keeping your containers up to date. Whether you're looking to monitor a complex AI workflow or just want a reliable status page for your personal projects, this setup provides a professional-grade solution at zero cost, and that includes running from a free VPS server. Watch https://www.youtube.com/watch?v=HJIvMfPa19M #technology #opensource #selfhosting #freeVPS #uptimerobot
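To make the "practical logic" concrete: at its core, a monitor like Uptime Kuma just requests each endpoint on an interval, times the response, and alerts when something fails. The Python sketch below is that loop in miniature, assuming a plain HTTP health check; it is only an illustration of the idea, not the docker-compose.yaml configuration from the video.

```python
# Bare-bones illustration of what an uptime monitor does on every check interval:
# request the URL, time it, and flag anything that fails or errors.
# Conceptual sketch only; Uptime Kuma adds dashboards, retries and notifications.
import time
import urllib.error
import urllib.request

URL = "https://example.com/health"   # placeholder endpoint to monitor
INTERVAL_SECONDS = 60
TIMEOUT_SECONDS = 10

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=TIMEOUT_SECONDS) as resp:
            latency_ms = (time.monotonic() - start) * 1000
            status = "UP" if resp.status < 400 else "DOWN"
            print(f"{status} {resp.status} in {latency_ms:.0f} ms")
    except (urllib.error.URLError, TimeoutError) as exc:
        print(f"DOWN: {exc}")   # a real monitor would send a notification here
    time.sleep(INTERVAL_SECONDS)
```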
Self-hosted Pinchflat will download YouTube videos or playlists into Jellyfin “Pinchflat can technically download individual videos, but its strength lies in archiving entire channels and playlists at a time. For starters, Pinchflat lets me add channel and playlist URLs as sources, and I can modify the indexing frequency, download cutoff days, and retention settings. On conventional YouTube downloaders, I’d have to pull new videos manually every so often, while Pinchflat can archive fresh uploads after detecting them with its indexing algorithms.” Personally I have been using TubeArchivist for this, but there are some key differences between the two: * Pinchflat acts like a “Sonarr for YouTube.” Once you set up a source (channel or playlist), it downloads the videos, names them according to your template, and saves the NFO and poster images. It uses low resources. * TubeArchivist works through its Jellyfin plugin, which talks to the TubeArchivist API to pull in metadata, thumbnails, and watch progress. TubeArchivist has its own dedicated web UI to browse your YouTube collection and is a bit heavier on resources (it requires Redis, Elasticsearch, and the app container). See https://www.xda-developers.com/this-app-turned-my-jellyfin-server-into-a-youtube-archive or https://github.com/kieraneglin/pinchflat #technology #opensource #Jellyfin
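As far as I understand it, both tools lean on yt-dlp under the hood (treat that as my assumption), and the sketch below shows the underlying idea in Python: point yt-dlp at a channel or playlist, give it a Jellyfin-friendly output template, and keep a download archive so repeat runs only fetch new uploads. The URL and paths are placeholders.

```python
# Sketch of the archiving idea behind tools like Pinchflat: have yt-dlp pull a
# channel or playlist into a Jellyfin-friendly folder layout, and keep a download
# archive file so re-runs only grab videos it has not seen before.
# Requires: pip install yt-dlp
from yt_dlp import YoutubeDL

SOURCE_URL = "https://www.youtube.com/@SomeChannel/videos"   # placeholder source

ydl_opts = {
    # One folder per channel, one file per video, named for the media library.
    "outtmpl": "/media/youtube/%(uploader)s/%(title)s [%(id)s].%(ext)s",
    # Remember what has already been downloaded so the next run only adds new uploads.
    "download_archive": "/media/youtube/downloaded.txt",
    # Grab thumbnails so the library has poster images to work with.
    "writethumbnail": True,
    # Keep going if one video in the source fails.
    "ignoreerrors": True,
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download([SOURCE_URL])
```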
Not yet, my Oasis reader is still supported. This is probably really useful for the older Kindles that have just been cut off from Amazon.
Welcome to Danie's spacestr profile!
About Me
Testing out the new noStrudel web client