Hacker News | aaroninsf's comments

Related: https://beadingwithalgorithms.org/?rule=[0101100021101010100...

https://mathstodon.xyz/@gwenbeads/115968206227675487

Gwen Fisher was at the Exploratorium After Dark event ten days ago running an activity where you could sit down and implement some of her CA rules on hex paper with markers.

It was fun, surprising, and beautiful.


This one uses some sort of quantum-effect processing, with image-processing feedback, to control a couple of independent knitting attachments:

https://idknitthatco.com/


And,

however much atonality and other formalisms represented an intellectual inevitability, they are ultimately useful mostly for having mapped a good bit of the coastline defining where the experience and enjoyment of musicality is grounded, in ways which are obviously embedded in both physics and our particular embodiment and, to a lesser degree, culture.

Jazz did a much more nuanced mapping of that ground IMO, but to the same end result: beyond the coast there is deep water, and there we do not swim.

Nor shall we, the collective. Not so long as we live in these bodies.

Individuals can swim; individuals can, through endeavor or some rare combination of circumstance, find musical value and enjoyment in the water, i.e. beyond conventional melody, harmony, and rhythm...

...but no amount of intellectual scaffolding or historical cultural momentum can bridge it.

Humans cluster inland.

I've spent decades in the experimental sound/music community and mapped some largely unvisited coves myself, having a particular interest in what those intellectual traditions called musique concrète;

and been to countless "noise" shows, and lived through many generations now of enthusiastic "kids" rediscovering various aesthetics.

The lines don't budge. The cultural framing of what it means to transgress them, and the communities that form around celebrating that "transgression," are each unique in their specific concerns, and unhappy in the same way.

Minimalism was a welcome success for pretty obvious reasons: it was a reversion and embrace of exactly those things at the heart of our embodied experience of music.


Almost everyone I know is limited to two areas, and of those, 90% are in one corner.

Knowing the incentive of the human who deployed it, at one remove or another, would require knowing more. But the more likely cases are easy to guess at: someone is playing with OpenClaw and intends to write something about it to boost their brand; could be a Show HN, could be a LinkedIn screed they hope goes viral.

Could be for fun. I remember fun.


Nothing enrages like change.

The key word in the HN headline is _can_.

Humans are not judged on the basis of what they _can_ do.

Reasoning about how to constrain tools on the basis of what they _could_ do, if e.g. used outside their established guardrails, needs to be very nuanced.


Correct; the ability of a model to reproduce source material verbatim does not necessarily make the model's existence illegal. However, using a model to do just that might very well present a legal liability for the user. I would be interested to see the extent to which models can "recite from memory" source code, e.g., from the various MS code leaks. Put another way, if I'm using LLM code generation extensively, do I need to run a filter on its output to ensure that I don't "accidentally" copy large chunks of the Windows codebase?
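That filtering question can be made concrete. A naive sketch of such a check, under my own assumptions (whitespace tokenization, exact n-gram matching; real provenance tools normalize whitespace and identifiers, and the corpus and window size here are purely illustrative):

```python
def ngram_index(corpus: str, n: int = 8) -> set:
    """Index every n-token window of a reference corpus
    (e.g., code you must not reproduce verbatim)."""
    tokens = corpus.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def flag_overlaps(generated: str, index: set, n: int = 8) -> list:
    """Return n-token windows of generated text that also
    appear verbatim in the indexed corpus."""
    tokens = generated.split()
    return [
        " ".join(tokens[i:i + n])
        for i in range(len(tokens) - n + 1)
        if tuple(tokens[i:i + n]) in index
    ]
```

Anything returned by `flag_overlaps` is a verbatim chunk long enough to be suspicious; how long "long enough" is, legally speaking, is exactly the open question above.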


With Twain in mind, might I suggest we adopt the simple expedient of snake casing such terms.
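For the literal-minded, the expedient is a one-liner; a sketch (the regex and lowercasing policy are my assumptions):

```python
import re

def snake_case(term: str) -> str:
    """Join a multi-word term with underscores, lowercased."""
    words = re.split(r"[\s\-]+", term.strip())
    return "_".join(w.lower() for w in words if w)
```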


Finally, someone who actually thought about where to draw the line instead of rejecting words with spaces entirely.


Ximm's Law applies to the "plateau" of 10%.

In other words: notionally, if not literally, by the time trailing numbers are collected they are out of date.

This is of course axiomatic, but that staleness is a serious matter in this particular moment.

It's a cliché that six months can be a lifetime on the bleeding edge of tech.

This is the first time in my career that it's more or less literally true.

Humans reason poorly with non-linear change.

This entire article is a demonstration of that.


Why have I not seen the butterfly meme yet?


Setting aside the marvelous murk in that use of "you," which parenthetically I would be happy to chat about ad nauseam,

I would say this is a fine time to haul out:

Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.

Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.

Lemma: contemporary implementations have already improved; they're just unevenly distributed.

These days I can't stop thinking about the XKCD whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."

I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this is the rare case where, absent black swan externalities ending the game, line goes up.


"every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon."

They're token predictors. That is an inherently limited technology, one optimized for making people feel good about interacting with it.

There may be future AI technologies which are not just token predictors, and will have different capabilities. Or maybe there won't be. But when we talk about AI these days, we're talking about a technology with a skill ceiling.

