Won't sign up with Google or Microsoft, lol. Ever. And just a pro tip for winning over the users who won't be bothered by that: after you type in your app idea it jumps straight to a sign-up, which feels weird. You should tease the user with some output to get them to want to sign up.
You also don't tell me this until I've already written a prompt, which is frustrating.
Recently I tried rork.app to generate mobile games for my kids. It is really amazing. They also give you a published URL which you can open directly on the phone.
Is there any significant difference in code generation?
Sounds promising. What Lovable and Bolt are missing is the ability to visually customize the output (like a no-code tool). Imagine if you could combine your AI agent with FlutterFlow-like no-code abilities? That would be magical.
Are Lovable, Bolt, or V0 good? I haven't tried them yet.
Are these tools going to replace application designers? How much work can they do, and how much remains to be done by engineers? Can they engineer complicated apps, or do they reproduce simple apps from a training set? (TODO list apps, etc.)
Is the code these systems output any good? Maintainable and extensible?
They are surprisingly capable given how long they've been around, and they allow you to eject and fully manage your code via git. I'd say try them out and share your experience.
This is very cool. I've been using Project IDX and the Roo AI extension to work on Flutter apps in YOLO mode and it's been great. The only issue is the model can't interact with the Android Emulator preview. Still trying to figure out a workaround for that, but if anyone has any ideas I would love to hear them!
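One workaround sketch, assuming the model can accept images: screenshot the emulator over adb and feed the capture back in with the next prompt. Roughly (adb on PATH, one emulator attached, Node):

    // Sketch: pull a screenshot from the running Android emulator via adb and
    // save it locally so it can be attached to the model's next prompt as an image.
    // Assumes `adb` is on PATH and exactly one device/emulator is connected.
    import { execSync } from "node:child_process";
    import { writeFileSync } from "node:fs";

    const png = execSync("adb exec-out screencap -p", {
      maxBuffer: 32 * 1024 * 1024, // screenshots can exceed the default buffer size
    });
    writeFileSync("emulator.png", png);
    console.log(`saved emulator.png (${png.length} bytes)`);

Crude, but it closes the loop until the extension can read the preview itself.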
I feel like these concerns are often overstated. Gemini was a well-known crypto exchange. Outside of their niche, I don't think Avid is something the average person is familiar with, or even the average software engineer.
Well, I've used Lovable and now this, and his description is right: it is Lovable for mobile apps, something I've wanted since I found Lovable. It suits my needs and built what I needed. Just waiting for downloads to be implemented. Great work, wcynthia.
What all these AI apps have is a certain look. It's like the Bootstrap look from the early days: you know the app is being made by somebody who doesn't really care about quality.
I think there is going to be a counter-reaction towards artisanal apps that comes out of all of this.
Just like how WordPress runs 70% of the web, but it's 70% of the crap web.
Similarly, AI will maybe create 70% of the apps, but it will be 70% of the crap apps.
And this is not some sort of reflection on AI. AI is actually great tech; it's more about the person making the app and the proof of work required to show how much they care about the product. Or, in this case, these apps will be associated with shady fly-by-night companies trying to sell something.
As someone who is capable of programming, writing HTML, etc., but not of graphic design or UI/UX design, I can tell you that 100% of the reason the little things I wrote used Bootstrap is that I really did care about quality. Bootstrap was the quickest and easiest way for me to get something "pretty good" straight out of the gate, without having to spend ten times as long tweaking my own design to get something a tenth as usable.
The problem is that the people who are lazy about it are obvious about it: lazy sites that use Bootstrap are obviously using Bootstrap, and lazy sites that use WordPress are obviously using WordPress, while the careful ones don't stand out. It's just confirmation bias.
It's just the Girls Suck at Math problem[0] all over again.

[0] https://xkcd.com/385/
I'm going to have to completely disagree on Bootstrap. The use of Bootstrap isn't an indication of a lack of quality; it's an indication of a desire for a decent UI from the start. The fact that it was widely used doesn't necessarily say anything about a given app.
Bootstrap, Material, etc. are all just established tools in terms of visual usability and consistency. Many people are far more concerned about having something functional over something that looks completely unique or different.
Personally, I tend to dislike most UI/UX experiments in terms of usability. Not all, but definitely most are just bad compared to what most people are used to.
It's because models struggle with design, period.
They're not great at getting things like margins right and consistent across an entire app while they're trying to follow instructions for a complex design.
Similarly, they understand contrast if prompted directly, but while they're implementing a complex design they'll still tend to end up making poor contrast choices, with tons of default fonts everywhere.
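To make "poor contrast choices" concrete: the WCAG 2.x check the models can recite but rarely apply mid-build is only a few lines (rough sketch; hex colors only, 4.5:1 threshold for normal text):

    // Rough sketch of the WCAG 2.x contrast-ratio check (normal text wants >= 4.5:1).
    function luminance(hex: string): number {
      const [r, g, b] = [1, 3, 5].map((i) => {
        const c = parseInt(hex.slice(i, i + 2), 16) / 255;               // sRGB channel, 0..1
        return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;  // linearize
      });
      return 0.2126 * r + 0.7152 * g + 0.0722 * b;                       // relative luminance
    }

    function contrastRatio(fg: string, bg: string): number {
      const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Light grey text on a white card, a classic default-theme choice:
    console.log(contrastRatio("#999999", "#ffffff").toFixed(2)); // ~2.85, fails 4.5:1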
-
If you iterate more and provide images back to the model, you can start to get something better, but that's tedious and the opposite of what most people using these tools are trying to do.
And V0 defaults to the absolutely awful shadcn/ui, which is the worst possible idea as AI-driven development becomes popular (let's create a UI library with minimal design tokens, no package name, and no guarantees on consistency or versioning, because you literally copy-paste it and update it by applying diffs).
I'm personally excited to see if larger models with multimodal output will be able to generate detailed, coherent UIs that I can then implement using a copilot for the tedious parts.
To me, that's the ideal flow to get something that doesn't have the "V0 look".
I’m not going to use a new technology just to find out what it looks like.
IMO your front page should have a minimum of 10 example apps to download.