Open the .docx file, save as Markdown to nicely preserve things like headings, bold, etc. I fairly often have reason to go from .docx to .md because I have a lot of editing/rewriting to do and I'd rather work in Emacs than LibreOffice Writer.
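If you'd rather skip LibreOffice for the conversion step, pandoc does the same job from the command line (a sketch, assuming pandoc is installed; the filenames are placeholders):

```shell
# Convert a .docx to GitHub-flavored Markdown, keeping headings, bold, lists, etc.
pandoc document.docx -f docx -t gfm -o document.md

# After editing in Emacs, convert back to .docx if needed
pandoc document.md -f gfm -t docx -o document-edited.docx
```

The `-t gfm` writer gives GitHub-flavored Markdown; plain `-t markdown` works too if you prefer pandoc's extended dialect.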
Markdown is a text markup language, of course. The output does not look like the input (unless you want to read raw text Markdown, in which case LibreOffice works fine).
How would that look in a single-pane, edit-in-place, wysiwyg editor? Where would you type the input, and where and when would it show the output?
Both the book and the song analogies are incorrect. In the case of code, the users for whom the programs are written are not engaging with the statements of the code; they are interacting with the interfaces the programs provide.
This is not the same when it comes to books and music.
This is extremely valuable. Every time we get a problem we can't reproduce, usually an extreme edge case, we end up replicating our entire production DB just to get to the error.
Looks to me that the issue is with the PR process, not with open-source.
From the article:
> It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
I don't have a solution for this; I'm pointing to the flaw in the assumption that AI is destroying open-source.
The solution is forking. Make a fork, update it to your heart's content. If it is found to be solid later, perhaps it will be studied and forked itself.
That we embrace it generally. Even just proposing a naming convention would allow agents to find the AI-sanctioned branch (or create it) and have at it.
(Maybe some AI agents can collaborate on "AILinux" and we can see how it measures up, ha ha.)
Maintainers could just say "No AI please" and refuse PRs they judge are probably AI-generated. The AI operator can figure out how to make a fork if that's what they want. But they probably don't want that, so there's no point in anybody else creating a system that nobody wants and nobody will use.
> Coding agents are designed to be accommodating, it doesn’t push back against prompts since it neither has the authority nor the context to do so. It may ask for clarifications upon what was specified, but it won’t say “wait, have you considered doing X instead?” A human developer would, or at least, they’d raise a flag. An LLM produces plausible output and moves on.
> This trait may be desirable as a virtual assistant, but it makes for a bad engineering teammate. The willingness to engage in productive conflict is part and parcel to good engineering: it helps broaden the search in the design space of ideas.
Whenever non-technical people ask me about LLMs, I tell them this:
The goal of an LLM is not to give you correct answers. The goal of an LLM is to continue the conversation.
> The goal of an LLM is to continue the conversation.
It’s even simpler. The goal of an LLM is to generate the next token.
That’s reductive, but worth considering. An LLM doesn’t have inherent goals, and you aren’t privy to how it was post-trained or on what, so you can’t assume it’ll behave in any particular way.
If you select Search > Advanced from the menu, you get a window where you can enter the content to search for. This is available in the normal as well as the alpha version.