Could we talk about vibe coding?

By Doll Eye, 10 February, 2026

Hey!

This is probably going to be a highly technical thread which I'll regret spawning... But, as we have some very clever computer science types on here, I was wondering if you could give me and others some pointers on vibe coding?

Questions that come to mind:

1. Is it actually worth it?
2. What are the limitations in scope?
3. Is it indeed improving as the big three are telling us?
4. Where is it best to get started on the Mac with VoiceOver, i.e., an accessible route in?

I'm sure other questions will arise.

I did have a play a few months back with Xcode and Claude and, though it worked well at first (an iOS app for playing my audiobook library from iCloud whilst grabbing metadata from Audible), the further down the line I got, the more errors appeared. It was basically losing coherence with each pass, entropy spreads and all that. I tried creating a brief, put it in a folder and kept asking the AI to refer back, checking against the development goals, but it got confused, more errors floated in and, not being fluent in such things, I just let it get on with the debugging.

I know what coders will say if I ask the question "is there any value in us learning coding basics?", but I'm also looking for the fastest start-to-finish of idea to app. I know that sounds lazy but... I really can't be bothered to end this sentence.

I think, what puts me off coding, and I have coded before during my degree in computer systems engineering, is the sheer weight of code, navigating it, syntax errors which are hidden from us, which results in me coming out in a cold sweat when faced with a wall of it.

I'm hoping you can give some pointers on this. Is Xcode, in fact, the best way to do this on the Mac, or are there better solutions?

Please talk to me like I'm an idiot... because... Well, I won't finish that sentence either.

Comments

By Doll Eye on Tuesday, February 10, 2026 - 11:48

I did start reading this post a while back but gave up. It feels like he's shouting at me... I don't like being shouted at. It makes me sad.

Also, I'm assuming it's all in caps.

I'll check out the podcast instead.

By João Santos on Tuesday, February 10, 2026 - 12:04

Nothing of real value has ever been produced with vibe coding; it's a total waste of resources, using very powerful technology the wrong way just because C-suites want to be self-sufficient. All the AI junkies are collectively digging their career graves, not because AI is getting any better but because they are letting their skills rot away by relegating themselves to the passenger seat instead of driving innovation as pilots.

Large language models are interesting from a scientific perspective, but in terms of production they are actually contributing negatively to society. The same systems that are being used to power them, at a loss and at scale, could instead be employed to do a lot more interesting scientific research. At this pace we are more likely to end up in a reality where AI outsmarts us not because it evolved into superintelligence but because we got a lot dumber, collectively speaking.

By a king in the north on Tuesday, February 10, 2026 - 12:19

Vibe coding is great if you just want to get started with an idea and want a proof of concept. However, the "move fast and break things" philosophy has made software far worse, because shipping with bugs is now more tolerated than ever. Many accessibility bugs, for example, are produced by the large language model, and in my experience it doesn't know how to fix them without human assistance.

The limitations are pretty clear to see, as long as you don't buy into the hype. You have already noticed that as complexity increases, so does the number of errors. Over the last year or so, very good scaffolding has been built to keep this from happening. Tools like Claude Code, for example (which is not accessible, BTW), have a lot of architecture to guide the large language model on what to do. This works, but it still depends a lot on the human using the tool. You will not get a good software project from zero-shot prompting, and I don't think this will ever be possible due to the nature of LLMs.

You have to be as explicit as possible with most of them, which is already hard for most humans to begin with. A large part of programming that doesn't get stated is that you often have to translate what the user actually wants into code, but what the user actually wants is not immediately clear by the language that they're using. That's where the human capacity has to come in.

The only frontier where they seem to actually be improving is the mechanics of writing code, which is admirable but won't replace anybody unless they're doing easy tasks and maintenance. The reason people struggle with coding is that they get buried in the mechanics of the thing instead of attempting to understand it from first principles before any coding is actually done. But if all you can do is measure yourself by the number of errors in your code, which is natural if you're starting out, you'll get very frustrated that way. The brain can't keep track of context right away; rather, it has to adapt over time. LLMs tend to drift out of context, which is a hard problem to solve. That's why they'll lose track of whatever they're doing unless they are grounded by tricks, either by explicitly reminding them over and over or by external memory, which is not always going to work. So really we're trying to hack around their limitations. I like treating them as fancy simulations to bounce ideas off of.

Hope this helps.

By mr grieves on Tuesday, February 10, 2026 - 12:58

Sometime last year, Atlassian broke the UI for a feature I depend on for my work in Bitbucket, so it was virtually impossible to use with VoiceOver. I have been struggling with it for about 8 months, maybe more, and it still doesn't work.

A few weeks ago, I was listening to the developer of Blind RSS on Double Tap talking about Vibe Coding, and it occurred to me that I now have access to a ChatGPT Business Plan.

So I installed "codex", the ChatGPT command-line tool. Connecting to my account was a bit confusing, as VoiceOver struggled a little to tell me what my options were, but I got there.

At this point I should tell you I write Python for a living. However, I have never written Swift (the language you use for iOS or the Mac), and I have never developed a macOS UI.

I told codex what I wanted: a native macOS app, connecting to Bitbucket Cloud, that could help me load up a pull request and view comments. Over the course of 3 days, and maybe about 8 hours in total, I had something working that was genuinely useful. I have been refining it a lot since then and adding a load of features, although I still can't do everything I need yet.

But without touching a line of code I have something working and 100% tailored to my use case.

In this scenario all I needed beforehand was Xcode (a free download for macOS/iOS development), Node.js (a free install) and codex (a free download/install). I believe I also needed a paid subscription. There is now a macOS app for codex which seems accessible from a first play, though whether it stays that way is anyone's guess.
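For anyone wanting to try the same route, the setup boils down to a few commands. This is only a sketch: the package name is the one published as of this writing and may change, and Homebrew is just one way of getting Node.js.

```shell
# Rough setup sketch for the codex route on a Mac (assumes Homebrew is installed).
xcode-select --install         # Apple's command-line tools; full Xcode comes from the App Store
brew install node              # Node.js, which the codex CLI runs on
npm install -g @openai/codex   # the codex command-line tool
codex                          # first run walks you through signing in to your ChatGPT account
```

After signing in, you run `codex` from inside your project folder and type your requests at the prompt.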

There is absolutely no way I would have developed this on my own. Much as I would like to have the time and energy to learn Swift and figure all this out on my own, if I had gone down that path I would never have anything working and the idea would have just faded from view over time.

I did get a little too ambitious to start with. I think keeping it small and iterating probably works better than trying to describe everything in one go. The first version of the app, for example, just had hardcoded values everywhere. I was particularly impressed with the way it created a diff view so I could see what has changed in a file. I described one way to do it, which I think it largely ignored, and what I got was actually pretty fantastic. Simple but understandable. The best thing I have used natively on a Mac with VoiceOver, even if it is lacking in features right now.

There are things it struggles with. For example, I wanted it to build a tree for the files but it seemed to struggle. Maybe it was my prompts. Sometimes it has failed to do something - e.g. I wanted to use APIs to get some extra details for things and it just couldn't do it - it tried and then rewrote and rewrote again and again but every time it failed.

The other thing: I keep telling it to make sure it works with VoiceOver. It generates an agent.md file, a markdown file with instructions about how it should work, so I added it there. I can also point out bugs where VO navigation isn't working, and it will usually fix them, even if it can take a number of goes.
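For anyone curious what that looks like: the agent instructions file is just plain markdown sitting in the project folder, and you can edit it like any other file. A minimal sketch of the kind of VoiceOver note I mean (the wording here is my own example, not what codex generates):

```shell
# Write an example agent instructions file into the project folder.
# The bullet points are illustrative; put whatever rules you want the model to keep following.
cat > agent.md <<'EOF'
# Project notes for the agent
- This is a native macOS app; it must work with VoiceOver.
- After any UI change, check that every control has an accessibility label.
- Prefer standard system controls over custom ones.
EOF
```

The point is that the model re-reads this file on each run, so rules you put there don't have to be repeated in every prompt.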

One of the best things I asked it to do was add an HTTP log so I could see what was going on. That helped me notice some crazy things - e.g. it was going through all pages of an API, so calling it loads of times, and then I wasn't even using that data. So I was able to direct it.

I've started to try to pay more attention to what it is doing. It is sometimes adamant that something that doesn't work is correct, and will put in a workaround for the cases where it fails rather than just using the right thing. I need to be careful about relying on it too much without paying any attention at all.

I think there is a definite danger to developing an app without actually understanding anything that is going on. The first time it broke the build and I had to figure out how to find the error to paste into it was unnerving. But I have no idea what the code is doing. I have no idea what the UI looks like or how many hacks have gone in to get things to work. Honestly, I don't care because the app is already incredibly useful and will hopefully continue to be.

It would be a little different if I decided to release this into the wild. For example, I am pretty sure the keys I used for the API are not in the code but I haven't checked. And I can't say for sure it's not doing anything it shouldn't.
Obviously it would be better if Atlassian didn't just break everything all the time and then not fix it. I shouldn't have to do this. But I am really grateful for the option.

In terms of a process, I find it both enjoyable and frustrating. It's amazing to say "can you just add a list of such and such containing this..." and then a minute later there it is. That is pretty mind blowing. On the other hand, you do need some patience when you endlessly iterate over a problem and it continues to be incapable of fixing it. I have wasted a lot of time trying and failing to have it do something that feels trivial.

Also sometimes you wonder if it really knows what it is doing. It does occasionally feel quite trial and error.

Oh, before I forget, one essential thing you should be familiar with is git, the version control system. Every time codex manages to do something and it seems to work, I tell it to commit the changes. You don't have to know the syntax or do it yourself, just be aware that you should do this a lot. The reason is that if it makes a total mess of things you can always say "revert all changes!" and get back to where you were.
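To make that concrete, here is roughly what that safety net looks like underneath. The file name and commit messages are my own examples; in practice codex runs the git commands for you when you ask it to commit or revert.

```shell
# A sketch of the commit-often safety net (file names and messages are examples).
git init -q myapp && cd myapp
git config user.email "you@example.com" && git config user.name "You"

echo 'v1: working build' > Notes.txt
git add -A && git commit -qm "Working: first version"   # checkpoint after each good change

echo 'oops, total mess' > Notes.txt                     # the model breaks things
git reset --hard -q                                     # "revert all changes!"
cat Notes.txt                                           # back at the last checkpoint
```

The habit matters more than the syntax: one checkpoint per working state means one command gets you out of any mess.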

By mr grieves on Tuesday, February 10, 2026 - 13:13

OK, the last comment was a bit long, but I thought I would follow up with how I am using this in my professional life.

Firstly, I think it is a bad idea to write code you do not understand on a professional basis. Let's face it, over the years we have piled up abstractions, so many developers don't understand absolutely everything anyway. E.g. if you are writing a Mac app, you might drag controls onto a design surface without really knowing how they get rendered, or you will probably use a library to access a database. But it is a big step from there to understanding nothing about your own work.

We have set up codex as part of code review. I am a little uncomfortable sending all our software up to ChatGPT to look at - this doesn't feel sensible, but it's not my call. The results, however, are actually pretty amazing. It can really understand a code base, and even look at the ticket to try to understand why a change is being made. It often comes up with incredibly insightful comments. It's not always right, but it is always worth considering what it says. Compare that to the human reviews I get, which are almost always pretty useless beyond pointing out some cosmetic errors.

I will use codex to vibe code a proof of concept, particularly if I am not given the time to do it properly myself. I have only done this once and my intention is to rewrite or refactor so I can make sense of it all as I don't really understand what it is doing yet. Which is OK for a quick POC but not good long-term.

And you can always use it for little bits. For example, I asked it to add a few things to an existing template when I didn't know the syntax. I had a look to see what it had done afterwards, and it just saves time compared to going to Google or normal ChatGPT.

I don't really want to use vibe coding for anything I have to support because it is essential I know what is going on. I don't think I would feel comfortable going all-in with vibe coding professionally, but I think there are related tools that are useful and it certainly does have its uses.

I do have a lot of concerns about vibe coding and AI in general. Much as I like some of the tools it gives me, I think the bigger picture is pretty terrifying, and I hate to think of the implications it has for society and the environment. It worries me a lot that things are becoming effortless to the point where no one is going to really value anything any more. I think you get more value when there is effort and graft, blood, sweat and tears, and less when you just say "I want it now!" and there it is, and now it's boring and on to the next thing.

But as a blind man I appreciate all the help I can get.

By Brian on Tuesday, February 10, 2026 - 13:32

I am of two minds when it comes to vibe coding. On one hand, I absolutely agree with @João Santos that fully relying on AI to write code and build programs of any significance is just going to lead to disaster. As someone who graduated with a degree in computer science, I have an understanding of what it takes to learn code, to learn syntax, and to understand the differences between variables and operators, 'if statements' and boolean states, arguments and definitions, just to name a few things.

On the other hand, I don't think there's necessarily anything wrong with using AI to double-check a line or two of code that you are having trouble with. Say, for example, there is a line, or even a block of code, that is giving you trouble - perhaps it is producing an error and you can't quite figure out how to go about correcting it. In these instances, I don't see how utilizing an AI is necessarily a bad thing.