Hacker News | sheept's comments

CJK text is typically rendered as two columns per character, but in general this depends on the terminal emulator
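In Python you can approximate this two-column convention with the standard library's East Asian Width data (a simplification; real terminals also handle ambiguous-width and combining characters differently):

```python
import unicodedata

def display_width(text: str) -> int:
    """Approximate terminal column width: East Asian Wide ('W') and
    Fullwidth ('F') characters count as 2 columns, everything else as 1.
    This is a sketch; it ignores combining marks and ambiguous widths."""
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in text)

print(display_width("abc"))   # 3
print(display_width("漢字"))  # 4: two wide characters, 2 columns each
```

Libraries like wcwidth do this more faithfully, including the terminal-dependent edge cases.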

Why? I don't see this pedantry for headlines about other countries, like "China did this" or "the UK did that". I think it's well understood that it refers to the government, not a generalization of its people.

My experience is the exact opposite. It is one of the most common points of pedantry I see in controversial political threads, across nations.

Not for no reason either. Turnout was 64.1%, so really it was the active decision of roughly 32% of eligible voters (64.1% turnout times his share of the vote) culminating in this. Kind of a pattern with modern democracies if you check.

Not that passively endorsing this by not voting when the opportunity was there would be much better though.


I hate this line of reasoning. People who didn't vote are equally guilty, because they didn't care enough to show up. Or maybe they just didn't make it to the polling station in time for some reason (having to pick up kids from school, or working a second shift, or something). You should always assume that the result of an election is representative of what society thinks. That's how elections (and opinion polls, for that matter) work. Unless you have really good proof that some minority group was actively excluded from voting.

There is actually an extensive mathematical literature on fair voting, the output of which is largely not in use, and I do find plenty of the alternative systems more representative:

https://youtu.be/qf7ws2DF-zk

I do think regular variety elections are generally representative though. I just also see value in keeping these asterisks in mind.


I'm not sure I'd use the word "guilty" - that suggests some wrongdoing.

However I agree with your premise - trying to remove abstaining voters from the math is incorrect. Abstainers are explicitly making their view known.

That view is "I don't care; they are equally good or bad". (Which in turn demonstrates a profound ignorance of what's going on - and frankly, folk that unconcerned should probably not pick a side.)

I believe it's fair to say "America voted for this". America is a democracy and the voters spoke. Of course it's not unanimous but majority rules.

And it's not like his campaign was disingenuous. The man was on display, and most of the things he's done were signaled clearly in the campaign. (He's long been against foreign wars, so the Iran debacle seems out of character, but then again it's in line with his dictator instincts, and he desperately needs a distraction from the Epstein files.)


Many people don’t vote because it is difficult for them, or because they don’t see a difference in their lives, since they get screwed one way or the other no matter who is in power. And, if you’ll recall, the last administration was complicit in genocide, which is why I voted third party.

It’s true trump is bad but so is genocide. Really hard to make the case of the lesser evil when it’s just variations on top tier criminality. You have to offer something to voters.


Yes, many people don’t vote because of deliberately fettered access to polling and/or a generally correct understanding that the electoral college nullifies or makes redundant their vote in their jurisdiction. Your vote for a third party is a signal, but essentially a qualified abstention. Your high horse, however, is misguided and absurd: to suggest that you held the moral high ground because the Biden administration supported the Gaza genocide is flatly wrong. If you want to place blame for that administration’s actions, blame Citizens United, blame AIPAC, blame the DNC, etc. And write letters, protest, get mad. But facilitating the ascent of what is objectively, obviously, candidly worse to make that statement is insulting to the intelligence of anyone to whom you make the argument. Perhaps your vote was in a jurisdiction where you could assume the electoral votes would go to the Dems anyway, but that just makes it flat-out virtue signaling. The left will continue to cut off its nose to spite its face, to the peril of US democracy and world peace. You nailed 'em, though.

Trump's exceptional, isn't he? He explicitly only governs for his base, and he's explicitly against those outside his base. Sure, he won a slim majority, but it's understood that democratically elected rulers govern all their citizens, if only to prevent electoral violence.

It's not really paranoia if it's happening a lot. They wrote a blog post calling several major Chinese AI companies out for distillation.[0] Perhaps it is ironic, but it's within their rights to protect their business, like how they prohibit using Claude Code to make your own Claude Code.[1]

[0]: https://www.anthropic.com/news/detecting-and-preventing-dist... [1]: https://news.ycombinator.com/item?id=46578701


And they conveniently left out that they themselves distilled DeepSeek for Chinese content into their model...

Their business shouldn't exist. It was predicated on non-permissive IP theft. They may have found a judge willing to cop to it not being so, but the rest of the public knows the real score. And most problematically for them, that includes the subset of hackerdom that lives by tit-for-tat. One should beware of pissing off gray hats; it's a surefire way to find yourself heading for bad times.

As a reviewer, I do care. Sure, people should be reviewing Claude-generated code, but they aren't scrutinizing it.

Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.

But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.

Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.


I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY code, redundant comments, etc.) were true of a piece of purely human-written code, I would reject it just the same, so what's the difference? Likewise, if Claude solely produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer, then why wouldn't I take it?

As you allude to (and I agree), any non-trivial quantity of code, if solely written by Claude, will probably be low quality, but this is apparent whether I know it's AI beforehand or not.

I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how much or how little you used AI as a standalone metric.

The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well, I just used Claude here, don't worry about that part".

(But also yes, of course I'm not going to talk to Claude about your PR. I will only talk to you, the human contributor, and if you don't know what's up with the PR, then into the trash it goes!)


One feasible scenario could be that they are working on or experimenting with ads, and it was put behind a feature flag, but for whatever reason the flag was inadvertently ignored.

That’s not implementing it by accident, that’s deliberate. In such a scenario perhaps the deployment was a mistake, but if you don’t write the malware in the first place, it can’t be deployed. (Probably. This is LLM stuff we’re talking about.)

(Yes, this is malware. It’s incontrovertibly adware, and although some will argue that not all adware is malware, this behaviour easily meets the requirements to be deemed malicious.)

It is said, never point a gun at something you’re not willing to shoot. Apply something similar here.


Creating 3D scenes with CSS has always been possible[0], but like this project, it's required JavaScript for interactivity.

But there are a lot more CSS features now. While in the past Turing completeness in CSS required humans to click on checkboxes, CSS can now emulate an entire CPU without JavaScript or user interaction.[1] So I wonder whether DOOM could be purely CSS too, in real time.

[0]: https://keithclark.co.uk/labs/css-fps/ [1]: https://lyra.horse/x86css/


The author links to the CSS x86 project:

> Yes, Lyra Rebane built an x86 CPU completely in CSS, but that technique is simply not fast enough for handling the game loop. So the result is something that uses a lot of JavaScript.


(author of x86css)

not only do i think doom in css is possible, but both me and another css person were also planning on actually making it into reality

but it sort of feels demotivating to see js-powered css projects like this hit the frontpage, because if we do eventually make a css-only doom people will think its a repost or nothing special

edit: and to be clear, that demotivation is more of a problem of how the internet, virality, and news cycles work. the actual project here is still pretty cool!


I feel obliged to repeat my assertion that this evolution of CSS was inevitable and foreseeable and that the HTML Editorial Review Board should’ve chosen DSSSL in the first place.

If you look at the code above, the temperature and weather are selected independently at random. That alone is not indicative of AI-generated code; a human could write something similar for demo/learning purposes.
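The pattern in question (a hypothetical reconstruction, not the original code) is simply two uncorrelated draws, which is why you can get incoherent pairs like a hot temperature with snow:

```python
import random

# Two independent draws: nothing ties the weather to the temperature,
# so combinations like 38 degrees with "snow" are possible.
temperature = random.randint(-10, 40)
weather = random.choice(["sunny", "rain", "snow", "fog"])
print(temperature, weather)
```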

> LLMs return malformed JSON more often than you'd expect, especially with nested arrays and complex schemas. One bad bracket and your pipeline crashes.

This might be one reason why Claude Code uses XML for tool calling: repeating the tag name in the closing bracket helps it keep track of where it is during inference, so it is less error prone.


Yeah that's a good observation. XML's closing tags give the model structural anchors during generation — it knows where it is in the nesting. JSON doesn't have that, so the deeper the nesting the more likely the model loses track of brackets.

We see this especially with arrays of objects where each object has optional nested fields. For complex nested objects, the model can get all items well formatted but one with an invalid field of wrong type. That's why we put effort into the repair/recovery/sanitization layer — validate field-by-field and keep what's valid rather than throwing everything out.
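A field-by-field salvage pass of the kind described above could look like this minimal sketch (the schema, field names, and payload are hypothetical; a real implementation would use something like Zod or pydantic):

```python
import json

# Hypothetical schema: each object in the array should have these fields.
SCHEMA = {"name": str, "score": float}

def salvage(raw: str, key: str) -> list[dict]:
    """Validate each object in a nested array field-by-field, keeping
    fully valid entries instead of discarding the whole payload when
    one field has the wrong type."""
    try:
        items = json.loads(raw).get(key, [])
    except (json.JSONDecodeError, AttributeError):
        return []  # unrecoverable at this level
    valid = []
    for obj in items:
        if not isinstance(obj, dict):
            continue
        cleaned = {f: obj[f] for f, t in SCHEMA.items()
                   if isinstance(obj.get(f), t)}
        if cleaned.keys() == SCHEMA.keys():  # all required fields valid
            valid.append(cleaned)
    return valid

raw = '{"results": [{"name": "a", "score": 0.9}, {"name": "b", "score": "oops"}]}'
print(salvage(raw, "results"))  # keeps the first object, drops the second
```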


Unless I'm totally misunderstanding something, it's not XML but special tokens for the tokenizer; someone smarter than me might know: https://medium.com/@nisarg.nargund/why-special-tokens-matter...

Not in Claude Code, where asking it to print the XML used for tool calling makes it accidentally trigger the tool call.

Hardly matters; this isn't a problem you'd have these days with modern LLMs.

Also, a model can always use a proxy to turn your tool calls into XML

And feed you back JSON right away, and you wouldn't even know whether any transformation took place.


We do see fewer invalid JSON outputs on the latest, bigger LLMs, but it can still happen on smaller and cheaper models. There are also cases where the input is truncated or a required field is not found, which are inherently difficult to handle.

On XML vs JSON, I think the goal here is to generate typed output, where JSON with Zod shines: for example, the result can type-check and be inserted into typed database columns later.


Thing is, even with XML the LLM will fail every now and then.

I've built agents both with native tool calling and by parsing XML.

You always need a self-correcting loop built in: if you are editing a file with an LLM, you need to provide hints so the LLM gets it right the second, third, or nth time.

Switching to XML alone won't get you that.

I used to use XML; now I only use it for examples in the system prompt for the model to learn from. That's all.
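The self-correcting loop described above can be sketched roughly like this (`call_llm` is a stand-in for a real model call; in practice the reply varies and the error hint is what drives correction):

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; always returns valid JSON here
    so the sketch is runnable without an API."""
    return '{"ok": true}'

def get_json(task: str, max_attempts: int = 3):
    """Ask for JSON; on a parse failure, feed the error back as a hint
    and retry up to max_attempts times."""
    prompt = task
    for _ in range(max_attempts):
        reply = call_llm(prompt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            # The parse error becomes part of the next prompt.
            prompt = f"{task}\nYour last reply was invalid JSON ({err}). Fix it."
    raise ValueError("model never produced valid JSON")

print(get_json('Return {"ok": true}'))  # {'ok': True}
```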


Agreed - in this project I did a one-pass sanitization to recover invalid optional/nullable fields or discard invalid objects in nested arrays.

I know multi-pass LLM approaches exist, e.g. generating JSON patches:

https://github.com/hinthornw/trustcall


Not only that, but LLMs do themselves a disservice by writing verbose code and decorating lines with redundant comments, which wastes their context the next time they work with it.

I have had good luck asking my agent 'now review this change: is it a good design, does it solve the problem, are there excessive comments, is there anything else a reviewer would point out?' I'm still working out what prompt to use, but that is about right.

"Ours" and "theirs" make sense in most cases (since "ours" refers to the HEAD you're merging into).

Rebases are the sole exception (in typical use) because ours/theirs is reversed, since a rebase replays your commits on top of the other branch. Personally, I prefer merge commits over rebases where possible; rebases make PRs harder for others to review by breaking the "see changes since last review" feature. Git generally works better without rebases and squash commits.
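The reversal is easy to demonstrate with a throwaway repo (a sketch; assumes git 2.28+ for `git init -b`). During the rebase of `feature` onto `main`, `--ours` selects main's version, even though you are "on" feature:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo
echo base > f.txt && git add f.txt && git commit -qm base
echo main > f.txt && git commit -qam "main version"
git checkout -qb feature HEAD~1
echo feature > f.txt && git commit -qam "feature version"

# Rebase feature onto main: the replay conflicts on f.txt.
git rebase main >/dev/null 2>&1 || true
# During a rebase, "ours" is the branch being rebased ONTO (main):
git checkout --ours f.txt
ours=$(cat f.txt)
echo "$ours"   # main, not feature
git rebase --abort
```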


Wow, interesting to see such a diametrically opposed view. We’ve banned merge commits internally and our entire workflow is rebase driven. Generally, I find that rebases are far better at keeping Git history clean and clearly allowing you to see the diff between the base you’re merging into and the changes you’ve made.

"Clean" is not the same as "useful". You have to be really, really disciplined not to make a superficially "clean" history which may appear linear but is actually total nonsense.

For example, if one is frequently doing "fix after rebase" commits, then they are doing it wrong and are making a history which is much less useful than a seemingly more complicated merge based history. Rebased histories are only clean if they also tell a true story after the rebase, but if you push "rebase fixes" onto the end of your history, then it means that prior rebased commits no longer make any sense because they e.g. use APIs that aren't actually there. Giving up and squashing everything to one commit is almost better in this case because it at least won't throw off someone who is trying to make sense of the history in the future.

I think that rebasing has won over merges mostly because the tools for navigating git histories suck SO HARD. I have used Perforce at a previous job, and their graphical tools for navigating a merge based history are excellent and were really useful for doing code archeology.


Generally our pattern is that every PR gets rebased into sensible commits. So in a way we are doing "squash commits" but the method is an interactive rebase. This keeps our history very pretty and clean, and simultaneously easy to grok and navigate.

My favorite git GUI is Sublime Merge.


Yes, I prefer that approach as well because it allows the person who authored the change to do all the work of deciding how to resolve conflicts up front (and allows reviewers to review that conflict resolution) instead of forcing whoever eventually does the merge to figure everything out after the fact. It also removes conflicts from the history so you never have to think about them later after the rebase/merge process is finished.

> Git generally works better without rebases and squash commits.

If squash commits make Git harder for you, that's a tell that your branches are trying to do too many things before merging back into main.


I don't know. Even when I'm working on my own private repositories across several machines, I really, really dislike regular merges. You get an ugly commit message and I can never get git log to show me the information I actually want to see.

For me, rebasing is the simplest and easiest to understand, and it allows you to squash some of your commits so that it's one commit per feature, bug fix, or logical unit of work. I'll also frequently rebase and squash commits in my work branch: where I've temporarily committed something and then fixed a bug before it's been pushed to main, I'll just reorder and squash the relevant commits into one.


I completely agree. Since switching to rebase, our history looks fantastic, and it makes finding things, cherry-picking, and generating changelogs really simple. Why not be neat? It costs us nothing, and you can easily have Claude make you a tutorial if you don't understand rebasing.

Don't do squash commits; just rebase -i your branch before merging so you only have one commit. It's pretty trivial to do.
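For the record, that interactive rebase can even be scripted (a throwaway-repo sketch; GIT_SEQUENCE_EDITOR rewrites the todo list, turning every "pick" after the first into "fixup"):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main .
git config user.email demo@example.com && git config user.name demo
echo base > f.txt && git add f.txt && git commit -qm base
git checkout -qb topic
for i in 1 2 3; do echo "$i" >> f.txt && git commit -qam "wip $i"; done

# Collapse the three WIP commits into one via a scripted rebase -i:
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -i main >/dev/null 2>&1
n=$(git rev-list --count main..topic)
echo "$n"   # 1
```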

