I’m interested. Is there a GitHub available?
After spending some time testing this new solution, I can say it works great. I am still missing the updated scaling code in a few block types, like tables and list items.
I hear a lot of developers complain about AI having sucked all the fun out of programming. I can understand that if what they enjoy is writing the code. But personally, I feel the complete opposite. I have never enjoyed programming more than now. I’m so happy that I no longer need to spend a ton of time writing the code. Instead, I can use that time on figuring out the best architecture for the system I’m building. When I work with an LLM, I write Markdown documents that I use to prompt it with detailed descriptions of the features I want it to implement. This could include which APIs I expect it to use. If it’s a lesser-known API, then I copy-paste its definition so the LLM knows exactly which parameters it expects and which methods can be called. I find that if I don’t give the LLM clear directions, it will start using deprecated methods and writing new helper methods even though it could just use existing ones. In general, I think LLMs can be a bit lazy at times… But they can do excellent work if given a good prompt. #AI #LLM #coding
I use the `ImageRenderer` to take snapshots of the canvas and it works well. But I do find it super annoying that it only works with SwiftUI. This is a problem as I use `NSViewRepresentable` from AppKit in many parts of the canvas, especially for text-based blocks in note document objects, such as headings, lists, code blocks, and paragraphs. So I had to implement a separate case: if the block is being rendered because a snapshot is about to be taken, skip `NSTextView` and just render a SwiftUI `Text` view instead. https://developer.apple.com/documentation/swiftui/imagerenderer
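A minimal sketch of that conditional rendering path. The `isSnapshotting` flag and all type names here are hypothetical, not from the actual app:

```swift
import SwiftUI
import AppKit

// Normal on-canvas path: an AppKit NSTextView bridged into SwiftUI.
struct TextBlockRepresentable: NSViewRepresentable {
    let content: String

    func makeNSView(context: Context) -> NSTextView {
        let view = NSTextView()
        view.isEditable = false
        view.string = content
        return view
    }

    func updateNSView(_ nsView: NSTextView, context: Context) {
        nsView.string = content
    }
}

// Hypothetical block view: when a snapshot is about to be taken,
// fall back to plain SwiftUI Text, which ImageRenderer can capture.
struct TextBlockView: View {
    let content: String
    let isSnapshotting: Bool  // assumed flag set before the ImageRenderer pass

    var body: some View {
        if isSnapshotting {
            Text(content)                             // SwiftUI-only path
        } else {
            TextBlockRepresentable(content: content)  // NSTextView path
        }
    }
}
```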
Here's a look at V1 of the grid view mode in fullscreen search: Most of my time has gone into a snapshot feature where the app takes a snapshot of the canvas. This will then be used in the fullscreen search to show a preview of the canvas file. I will most likely use the same approach for note documents instead of rendering the actual note document. It's less resource intensive to load an image compared to a note document with multiple blocks. Later, I will also use the snapshot feature in the preview overlays that appear when hovering over a note or canvas file. But before this, I think I should fix the UI for fullscreen search first. Just simple things, like making the list of search results take up as much height as possible, extending all the way to the bottom of the window. In the video, you can see the canvas being a bit laggy in the beginning when it's first loading. I do have some debouncing, but I think it might be a good idea to have a more sophisticated approach where only certain objects load at the same time. Perhaps only the ones inside the viewport, but if too many are showing, then have a maximum of how many can load at the same time. #dev #AppKit #Swift #SwiftUI #macOS
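That viewport-limited loading idea could be sketched roughly like this. All names (`CanvasObject`, `objectsToLoad`, `maxConcurrentLoads`) are illustrative, not from the app:

```swift
import CoreGraphics

// Hypothetical canvas object with a frame in canvas coordinates.
struct CanvasObject {
    let id: Int
    let frame: CGRect
    var isLoaded = false
}

// Returns the ids of objects that should start loading now:
// only unloaded objects intersecting the viewport, capped at a
// maximum number of concurrent loads.
func objectsToLoad(_ objects: [CanvasObject],
                   viewport: CGRect,
                   maxConcurrentLoads: Int) -> [Int] {
    objects
        .filter { !$0.isLoaded && viewport.intersects($0.frame) }
        .prefix(maxConcurrentLoads)
        .map(\.id)
}
```

Each time a load finishes (or the viewport changes), the function can simply be called again to pick the next batch.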
I should also use the snapshot approach for canvas preview overlays. These appear when hovering a canvas file in the ‘Files’ sidebar, or when hovering a tab that contains a canvas file.
I have been working on improving the fullscreen search functionality by implementing a preview mode. So now, there are two modes: _Grid_ and _Preview_. The preview mode is simply the grid but with a preview of the file’s contents. There will also be a third mode called _List_. I experienced some serious performance issues after implementing this new preview mode, and this is really where most of my time was spent. Before, I was showing a real representation of the canvas where I rendered all the objects and scaled them to an appropriate size. Doing a full render of a canvas that could potentially contain 20 different objects (images, videos, note documents, etc.) is a bad idea. And it becomes an even worse idea if the search results include more than one canvas. I decided to figure out a different solution that would solve the performance issues. So I started experimenting… I figured that I could actually just show an image of the canvas, so I implemented functionality to take a snapshot of the canvas whenever closing a canvas file or switching to a different tab. This works really well, and I think I will choose the same approach for the minimap inside canvas files. Because currently, I am doing a full render of the objects inside the minimap and there is just no reason to do that… Instead, I will just use the snapshot approach. This means I should also save a snapshot after making a change in the canvas, but of course with a debounce timer. For example, when moving an object, take a new snapshot after a timer has expired. Will share a video demonstrating the fullscreen search later…
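The debounce timer described above could be sketched like this. The class name and the 0.5 s delay are assumptions, not the app's actual values:

```swift
import Foundation

// Hypothetical debouncer: every canvas change schedules a snapshot,
// but only the last call within the delay window actually fires.
final class SnapshotDebouncer {
    private var pending: DispatchWorkItem?
    private let delay: TimeInterval
    private let queue: DispatchQueue

    init(delay: TimeInterval = 0.5, queue: DispatchQueue = .main) {
        self.delay = delay
        self.queue = queue
    }

    /// Call on every change (e.g. repeatedly while an object is moved).
    func canvasDidChange(takeSnapshot: @escaping () -> Void) {
        pending?.cancel()  // discard the previously scheduled snapshot
        let work = DispatchWorkItem(block: takeSnapshot)
        pending = work
        queue.asyncAfter(deadline: .now() + delay, execute: work)
    }
}
```

Dragging an object then produces a stream of `canvasDidChange` calls, and only one snapshot is taken once the user pauses.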
Using `Text()` from #SwiftUI has turned out to be a bit of a problem - at least in the visual canvas. In SwiftUI, the easiest way to zoom in on a view is to use `.scaleEffect()`. This is exactly what I did in the canvas: when you zoom in on a text object, I would increase the text size by simply applying a `.scaleEffect()` modifier. However, this caused an annoying problem where the text would become pixelated, so this approach was just not viable. I had to find a different way... I tried another approach where I would dynamically scale the font size (multiplying the base font size by the zoom scale). This did indeed solve the problem for normal text objects, so the text no longer became pixelated. But this approach introduced its own problem for note objects, which also use `Text()` from SwiftUI: it caused layout jumping. If I understand it correctly, the underlying text rendering engine, `CoreText`, does not scale fonts linearly. For example, a 14pt font scaled to 200% doesn't always take up exactly twice the space of a 7pt font. I noticed that small adjustments to letter-spacing would happen at every fractional font size. This meant that as the user zooms, a word that barely fit on line one suddenly becomes 0.1 pixels too wide and wraps to line two. This would cause the entire text block to "jump" vertically as lines snap back and forth between different wrap points. It felt broken and looked bad. The solution ended up being to use AppKit. Unlike SwiftUI’s `Text()`, the `NSTextView` from #AppKit allows us to manipulate the text to a much greater degree. So this did indeed fix the problem for note objects. BUT... It also introduced a problem, since I use `ImageRenderer` to take a snapshot of the canvas, which is then used to display a preview of the canvas in search results. The thing is that `ImageRenderer` **ONLY** works with SwiftUI. It is simply incapable of capturing an AppKit view like `NSViewRepresentable`.
It seems like my only choice is to render an `NSTextView` in the canvas, and then render SwiftUI `Text()` when the app needs to take a snapshot with the `ImageRenderer` API. #dev
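The two zoom approaches compared above, as a minimal sketch. The 14 pt base size and all names are illustrative:

```swift
import SwiftUI

/// Multiplies the base font size by the zoom scale.
func scaledFontSize(base: CGFloat, zoom: CGFloat) -> CGFloat {
    base * zoom
}

// Hypothetical zoomable text object on the canvas.
struct ZoomableText: View {
    let text: String
    let zoom: CGFloat

    var body: some View {
        // Approach 1 (pixelated at high zoom): render at base size,
        // then rasterize and scale the result:
        //   Text(text).font(.system(size: 14)).scaleEffect(zoom)

        // Approach 2 (crisp, but wrap points can shift at fractional
        // font sizes, causing the layout jumping described above):
        Text(text).font(.system(size: scaledFontSize(base: 14, zoom: zoom)))
    }
}
```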
Thanks I will check it out
I fixed a few syntax errors myself a few hours ago. Sometimes it’s still faster to do stuff yourself than to prompt an AI and wait for its response. I find LLMs great for researching and discussing architecture (but they will tell you bs sometimes, so don’t rely on them too much). And then I use them for implementing big new features.
Now that I have improved the media popup by implementing support for zooming in and out with pinch gestures and an easier way to dismiss the media popup by either clicking the background or by swiping up and down, I think it's time to work on an easier way to switch between the different media attachments in a note document. I found this on Threads that I think looks quite good: Video credit: https://www.threads.com/@rnmp/post/DVeEjuSCmNx?xmt=AQF057FozccuzjLn5mwvLeC3EQH6UDwIhCVqwtNWOpJ-YQ Initially, I just wanted to show chevron buttons to switch to the next or previous attachment, but I think showing the actual thumbnails looks much nicer. I'll see if I can get that working 💻 #dev #inspiration
Been working on a different way to dismiss the image popup in the notes app. Before, the only options were to press the Escape key or to click the background behind the image, which would also close the popup. Now the user can simply swipe up or down, or "zoom out" with a pinch gesture. I really want the image in the popup to animate back to its original position in the note document. I do have a semi-working solution, but it's still experimental. I'm having trouble with the opening animation being laggy, but the closing animation looks fine for some reason. #dev #macOS #Swift #SwiftUI #AppKit
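A rough sketch of those dismiss gestures. The 100 pt drag threshold and 0.8 pinch-scale threshold are assumptions, as are all names:

```swift
import SwiftUI

/// Decides whether a gesture should close the popup:
/// a swipe up or down past a drag threshold, or a pinch that
/// "zooms out" below a scale threshold.
func shouldDismiss(dragHeight: CGFloat, pinchScale: CGFloat? = nil) -> Bool {
    if abs(dragHeight) > 100 { return true }                 // swipe up/down
    if let scale = pinchScale, scale < 0.8 { return true }   // pinch "zoom out"
    return false
}

// Hypothetical popup view wiring the two gestures together.
struct MediaPopup: View {
    let image: Image
    let dismiss: () -> Void
    @State private var dragOffset: CGFloat = 0

    var body: some View {
        image
            .offset(y: dragOffset)  // image follows the finger
            .gesture(
                DragGesture()
                    .onChanged { dragOffset = $0.translation.height }
                    .onEnded { value in
                        if shouldDismiss(dragHeight: value.translation.height) {
                            dismiss()
                        } else {
                            dragOffset = 0  // snap back below the threshold
                        }
                    }
            )
            .simultaneousGesture(
                MagnificationGesture()
                    .onEnded { scale in
                        if shouldDismiss(dragHeight: 0, pinchScale: scale) {
                            dismiss()
                        }
                    }
            )
    }
}
```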
I love blur effects, and because of this, one of the first things I tried when I started learning AppKit was how to implement a beautiful blurry background for elements. This is something that's super easy in HTML and CSS, and while it's also quite easy in AppKit, you don't get anywhere near as much customization - something as basic as the blur radius cannot be changed in the `NSVisualEffectView` API provided by Apple. I always found this super annoying, so I began looking for hacks to work around this problem. Unfortunately, I never got very far with it... AppKit is complex and a lot of it is poorly documented, so for a beginner, it's a big and hardcore task. And therefore, I just decided to live with the default `NSVisualEffectView`... But just now, I found this repository: https://github.com/dominicstop/VisualEffectBlurView This repository seems to have found a workaround to the _custom blur radius_ issue. However, they work in UIKit and I work in AppKit, so it's not quite the same, but it should still be very useful as I think they use `CAFilter`, which I believe works in both AppKit and UIKit. I've never worked with `CAFilter` before, so it should be interesting. Apparently it's a super old thing, and also, it's a private API, so I don't expect to be able to find any official documentation about it... #dev
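For reference, the stock blur discussed above is just `NSVisualEffectView` wrapped for SwiftUI: you pick a preset material and blending mode, and there is no blur-radius parameter anywhere in the public API. A minimal wrapper, with the material choice being an arbitrary assumption:

```swift
import SwiftUI
import AppKit

// Bridges AppKit's NSVisualEffectView into SwiftUI. Only the preset
// materials are configurable; the blur radius itself is not exposed.
struct BlurBackground: NSViewRepresentable {
    var material: NSVisualEffectView.Material = .hudWindow

    func makeNSView(context: Context) -> NSVisualEffectView {
        let view = NSVisualEffectView()
        view.material = material
        view.blendingMode = .behindWindow  // blurs content behind the window
        view.state = .active               // stay blurred even when inactive
        return view
    }

    func updateNSView(_ nsView: NSVisualEffectView, context: Context) {
        nsView.material = material
    }
}
```

Used as e.g. `.background(BlurBackground())` on any SwiftUI view.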
23 🇩🇰 Studying for a degree in Software Engineering while building fun projects and working freelance as a News Photographer 📷 I share my software projects, photos and videos from my work as a news photographer, and progress updates as I learn to sew garments. Basically, I just write about my hobbies. frederikhandberg.com