phoenix.new thoughts/feedback

This is hard to pin down specifically, but something about how this works differs from what I've seen in other tools built around Claude, in ways that are very odd and can be quite frustrating.

Some of it seems to come from the way it uses, and seems to heavily depend on, the web command, which can lead to these very zoomed-in, almost desperate cycles of tearing through the code, frantically fixing and changing things that don't need changing. This is something Claude will do anyway, but it feels worse here.

Another thing it seems to do more than I've seen elsewhere is seemingly unprompted, unexpected, and quite frustrating wholesale rewriting of things adjacent to what it's been told to do. More than half of my time with it has been spent getting it to go back and undo changes that were not only unasked-for but wrong and breaking.

The other thing is that, unlike Claude Code, I can't easily chime in while it's working without hard-stopping it mid-task. Combined with its tendency to run off and start doing all kinds of things without clarifying (sometimes even when I've asked it not to do anything), that makes the work very hard to contain, and very expensive for simple objectives, because so much time has to be spent redoing things. I've never experienced so much chaos from an AI coding assistant.

I hope this isn't coming across as a bunch of complaining. I really like the direction here, and I appreciate how complicated it must be to extend and add functionality on top of these models while keeping them well behaved. I'm hoping this is helpful feedback; I've put a lot of time into running this thing through its paces and figured I should share what I've found! As it is right now, I'm finding it very expensive to use compared to similar approaches elsewhere, mostly as a result of the above.

I am happy to provide more feedback if that’s helpful

Random thought: you could have emoji thumbs-up/thumbs-down reactions on the responses it posts while it's working, which could send it feedback about what it's doing ¯\_(ツ)_/¯

So if it starts doing something you don't like, that would be a softer way to interrupt than a hard stop of the command. Even if it were just a way to inject that you don't like what it's up to, or about to do, before it moves on to the next thing, since there is currently no way to get in between.

Just a thought on feedback for AI dev tooling: is it worth noting in your post your level of software development experience? I should think the product owners would want to ensure they’re prioritising comments from their target audience.

25+ years, with extensive experience running dev and product teams, including Elixir teams. I'm unsure whether I am the target audience; part of my motivation here is exploring the various AI-based development approaches to get a feel for the different ways things are being implemented and what's available.

When using Claude Code I have been observing and approving much more closely, looking at the changesets before committing, but I've been experimenting here with not doing that (and given the way this works, that doesn't seem plausible anyway, since it just commits as it goes, and that's part of the intent).

I also just removed a bunch of stuff it had added to the Claude instructions file that may have been contributing to its behaviour, so we shall see if that improves things a bit. But there is definitely an over-eagerness to it, and the way the chat works makes it hard to interject; sometimes even hitting stop doesn't really work (it'll stop but immediately keep going, so you have to hammer stop repeatedly to rein it in).

