drochetti's comments

The link is on the GitHub repo.

As for the feedback, fair enough. As I said, this is just a demo and it's in its early stages. It has no goal of replacing professional video editors or even matching their features. There are some annoying bugs and missing features, and we will tackle them.

As for your last comment, feel free to judge, but I can say it was not quick at all to do this just to post on HN. You can check the full commit history and the codebase to get an idea of how much care was put into it.


It is quite clear from the result that you've spent all of your time and effort building a front end to genAI rather than making a video editor. As of now, I see very little that looks like a video editor. There are no basic transport controls, there's no obvious way of trimming clips, there's nothing that says "you can edit videos" in this.

This is a very premature demo that does more harm than good in promoting your product. If you had said you built a tool that makes it easy to generate content, you'd have a much more interesting product. Tacking on claims of this being a video editor is extremely disappointing, as the most basic abilities of a video editor are missing. This is what invites the criticism of a rushed product launch done just for the PR postings.


Thanks! Not as a component, but you can clone or copy it and modify as needed.

It's quite a complex UI, so it's not easy to export it as a single component.


That's fair. But I know a lot of smart folks out there who have trouble building that "thin layer of UI". So if this helps them, mission accomplished.

Anyone can replace the AI layer with their own local models or other services; whatever suits your use case and preferences is fair game.


Absolutely. The idea of being open source with a permissive license is that we're encouraging anyone to do whatever fits their use case.

You can replace anything, deploy on your own server, port it to other stacks... whatever brings value to you.

We're also open to PRs; open an issue in the repo and we can get the conversation going.


Dragging and dropping media onto the timeline, and dragging media along a timeline track, are already supported.

We will keep improving the UI, including shortcuts. Thanks a lot for the feedback.


Thanks for the feedback; an ETA would be great indeed. I'll look into it.


There's no hidden magic in the playground or the demo app: we use the same API available to all customers, along with the same JS client and best practices documented in our docs.

For all your questions, I recommend playing with the API playground; you'll be able to test different image sizes and parameters and get an idea of the cost per inference.

If you have any other questions, say hello on our Discord and I'll be happy to help.

https://fal.ai/models/stable-diffusion-xl-lightning


We just added share. Let me know what you come up with!



Love it! I have to log off, but I should let you know: the generation seems to differ depending on whether you arrow up or arrow down into a seed when focus is on the seed input (i.e., going up from 5 to 6 has a different result than going down from 7 to 6).
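The expectation behind this bug report is that a seed fully determines the output: arriving at seed 6 by arrowing up or arrowing down should make no difference. A minimal sketch of that property with a generic seeded RNG (a toy stand-in, not the product's actual generation code):

```python
import random

def generate(seed: int) -> float:
    # Toy "generation": the output is a pure function of the seed,
    # independent of how the seed value was reached in the UI.
    rng = random.Random(seed)
    return rng.random()

# Reaching seed 6 from 5 (arrow up) or from 7 (arrow down)
# must yield the identical result.
assert generate(6) == generate(6)

# Different seeds are expected to yield different results.
assert generate(5) != generate(6)
```

If the observed behavior violates the first assertion, the seed input is likely feeding a stale or off-by-one value into the generation call.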


What I see here is a bunch of smart, tech-savvy people talking about privacy who never took a #lookoftheday pic for Instagram and are never going to buy the product anyway. How do you know you are completely safe with your phone, with your laptop? The fact is that millions of people post "look of the day" pics on Instagram and Snapchat every day, taking selfies in front of the mirrors in their bedrooms, and they just don't care. I bet they'd care about a product that makes that easier, and afaic the Echo Look is promising to deliver that. So I'm curious to see how the actual target audience will respond to the product.


My phone can only do EDGE and I don't pay for data. My laptop has the microphone and webcam disconnected internally.

