right now it's hardcoded: you write a web automation script where, at a few points, a very specialized agent handles the steps in your workflow that require reasoning/adaptability. Future: we're trying to make this process automatic.
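A minimal sketch of that pattern, with all names hypothetical: the deterministic steps are plain scripted code, and only the one fuzzy step calls out to an agent.

```python
# Sketch of a hardcoded web-automation flow where only one step needs
# an agent. `call_agent` is a stand-in for whatever LLM/agent backend
# you use (hypothetical, not a real API).

def call_agent(prompt: str) -> str:
    # Placeholder: in practice this would hit an LLM endpoint.
    return "clicked the 'Export CSV' button"

def run_workflow(page_text: str) -> list[str]:
    log = []
    # Deterministic steps: plain scripted automation.
    log.append("login: filled credentials form")
    log.append("navigate: opened reports page")
    # The one step that needs reasoning/adaptability goes to the agent.
    decision = call_agent(f"Given this page, find the export control:\n{page_text}")
    log.append(f"agent: {decision}")
    # Back to deterministic steps.
    log.append("download: saved file to disk")
    return log

steps = run_workflow("<html>...reports page...</html>")
```

The "make it automatic" future would mean deciding at runtime which steps get delegated, instead of hardcoding that one call site.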
think the visa hurdle is the big one. even if you have a strong background, a lot of companies hesitate unless they already have an immigration pipeline set up. another angle could be looking for remote roles at US companies first, then trying to convert that into relocation later. a bit longer path but sometimes more realistic.
good luck.
I think the UX of ChatGPT works because it's familiar, not because it's good. It lowers friction for new users but doesn't scale to more complex workflows. If you're building anything beyond Q&A or simple tasks, you run into limitations fast. There's still plenty of space for apps that treat the model as a backend and build real interaction layers on top, especially for use cases that aren't served by a chat metaphor.
I wouldn't call it familiar; it's a weird quasi-chat. They didn't even do the chat metaphor right: you can't type more while the AI is thinking, nor can you really interrupt it when it's off over-explaining something for the 20th time without just stopping it outright.
It's missing obvious settings, and it has a weird UX where every now and then mysterious popups appear like 'memory updated', or it spews random text while it's "thinking", or it asks you to choose between two answers; but I'm working, so no thanks, I'll just pick one at random so I can continue.
People had copy-pasta templates they dropped into every chat with no way of saving them. Then they added a sort of ability to save them, but it worked in an inscrutable and confusing manner; then they released new models that didn't support that, so you're back to copy-pasta, and blurgh.
It's a success despite the UI because they had a model streets ahead of everyone else.
funny enough, i started noticing em dashes mostly through using GPT. wasn’t really part of my writing before, but now i find them super useful for managing rhythm and flow.
definitely earned their place — not because LLMs use them, but because they actually work.
(says ChatGPT in response to this post)
this is one of the more compelling "LLM meets real-world tool" use cases i've seen. openSCAD makes a great testbed since it's text-based and deterministic, but i wonder what the limits are once you get into more complex assemblies or freeform surfacing.
curious if the real unlock long-term will come from hybrid workflows, LLMs proposing parameterized primitives, humans refining them in UI, then LLMs iterating on feedback. kind of like pair programming, but for CAD.
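One way to picture that "LLM proposes a parameterized primitive, human refines it" loop (every name and number below is made up for illustration): the model emits OpenSCAD source as text, and the human pass only touches the parameter block.

```python
# Toy illustration of the hybrid loop: an "LLM" emits parameterized
# OpenSCAD source, and a human (or a later LLM pass) edits only the
# parameters. Everything here is hypothetical.

def propose_table_leg(height: float, radius: float) -> str:
    # Pretend this string came back from an LLM: a parameterized primitive.
    return (
        f"leg_height = {height};\n"
        f"leg_radius = {radius};\n"
        "cylinder(h = leg_height, r = leg_radius, $fn = 64);\n"
    )

def refine(scad: str, **params: float) -> str:
    # Human-in-the-loop step: rewrite only the matching parameter lines,
    # leaving the geometry code untouched.
    out = []
    for line in scad.splitlines():
        name = line.split(" = ")[0]
        if name in params:
            out.append(f"{name} = {params[name]};")
        else:
            out.append(line)
    return "\n".join(out) + "\n"

scad = propose_table_leg(height=450, radius=20)
scad = refine(scad, leg_height=430)  # human nudges one parameter
```

Because the geometry stays parameterized, the LLM can iterate on the structure while the human owns the dimensions, which is roughly the pair-programming split.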
Complex assemblies completely fall on their face. It's pretty fun/hilarious to ask it to do something like: "Make a mid-century modern coffee table" -- the result will have floating components, etc.
Yes to your thought about the hybrid workflows. There's a lot of UI/UX to figure out about how to go back and forth with the LLM to make this useful.
This is kind of the physical equivalent of having the model spit out an entire app, though. When you dig into the code, a lot of it won't make sense, you'll have meat-and-gravy variables that aren't doing anything, and the app won't work without someone who knows what they're doing going in and fixing it. LLMs are actually surprisingly good at codeCAD given that they're not trained on the task of producing 3D parts, so there's probably a lot of room for improvement.
I think it's correct that new workflows will need to be developed, but I also think that codeCAD in general is probably the future. You get better scalability (share libraries for making parts, rather than the data), better version control, more explicit numerical optimization, and the tooling can be split up (i.e. when programming, you can use a full-blown IDE, or you can use a text editor and multiple individual tools to achieve the same effect). The workflow issue, at least to me, is common to all applications of LLMs, and something that will be solved out of necessity. In fact, I suspect that improving workflows by adding multiple input modes will improve model performance on all tasks.
exactly. hindsight bias makes it really hard to separate genuine inference from subtle prompt leakage. even framing the question can accidentally steer it toward the right answer. would be interesting to try with completely synthetic problems first just to test the method.
same here. brew’s been great historically but it’s gotten bloated and kinda slow. curious to see if sapphire can keep things lean without sacrificing compatibility.
yeah, that'd be nice: some kind of self-bootstrapping system where you start with a strong cloud model, then fine-tune a smaller local one over time until it's good enough to take over. tricky part is managing quality drift and deciding when it's 'good enough' without tanking UX. edge hardware's catching up though, so it feels more feasible by the day.
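A rough sketch of the hand-over logic (the threshold and scoring are placeholders, not a real system): route to the local model only once its measured quality clears a bar for several consecutive evals, falling back to the cloud otherwise.

```python
# Toy router for the cloud-to-local hand-over. Quality scores would
# come from an eval harness in practice; here they're just numbers.

CLOUD, LOCAL = "cloud", "local"
QUALITY_BAR = 0.85  # hypothetical "good enough" threshold

def pick_backend(local_quality_history: list[float]) -> str:
    # Require a few consecutive evals above the bar before switching,
    # to avoid flapping on noisy scores (the quality-drift guard).
    recent = local_quality_history[-3:]
    if len(recent) == 3 and all(q >= QUALITY_BAR for q in recent):
        return LOCAL
    return CLOUD
```

Hysteresis like this is one way to keep a single bad eval from bouncing users between backends mid-session.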