timdellinger's comments

an eternal truth, but like all eternal truths, there are many people seeing it for the first time

I would perhaps articulate it as:

you find your tribe by hoisting a flag and seeing who rallies around.

choose action over perfection - you'll be happier in the long run.

so: write on the internet.


an opinion, and a falsifiable hypothesis:

call me old-fashioned, but two spaces after a period will solve this problem if people insist on all-lower-case. this also helps distinguish abbreviations such as st. martin's from the ends of sentences.

i'll bet that the linguistics experimentalists have metrics that quantify reading speed measurements as determined by eye tracking experiments, and can verify this.


> [I]'ll bet that the linguistics experimentalists have metrics that quantify reading speed measurements as determined by eye tracking experiments, and can verify this.

You appear to be trolling for the sake of trolling, but for reference: reading speed is determined by familiarity with the style of the text. Diverging from whatever people are used to will make them slower.

There is no such thing as "two spaces" in HTML, so good luck with that.


> There is no such thing as "two spaces" in HTML, so good luck with that.

Code point 160 (no-break space) followed by 32 (a regular space). In other words `  ` will do it.
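As a quick sanity check (a Python sketch, not HTML itself): HTML collapses runs of plain spaces, but a no-break space survives, so the pair renders as two visible spaces.

```python
# The proposed "two spaces": U+00A0 (no-break space) followed by U+0020 (space).
pair = "\u00a0 "

print([f"U+{ord(ch):04X}" for ch in pair])        # ['U+00A0', 'U+0020']
print(pair.encode("ascii", "xmlcharrefreplace"))  # b'&#160; ' -- the HTML-escaped form
```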


There's U+3000, ideographic space. It's conceptually fitting: sentence separation as "idea separation".

edit: well, I tried to give an example, but HN seems to replace it with a regular space. Here's a copy-paste version: https://unicode-explorer.com/c/3000


Belying the name somewhat, I believe U+3000 is specifically meant for use with Sinoform logographs, having the size of a (fullwidth character) cell, and so it makes little sense in other contexts.


The extended horizontal size is the only goal here. The dimensions of a sinoform are still tied to point size, so the relative spacing, compared to chr(32) at the same point size, is reasonably larger.

But...the vertical dimensions don't scale so well, at least in my browser. It causes a slight downward shift.


You’d perhaps be better off using U+2002 EN SPACE (or the canonically equivalent U+2000 EN QUAD).

From what I recall, the size of a typical interword space is ⅓ of an em, and traditionally a bit more than this is inserted between sentences (but less than ⅔ of an em). The period itself introduces a fair amount of negative space, and only a skosh more is needed if any.
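For reference, the characters under discussion can be inspected with Python's stdlib unicodedata; a small sketch (actual rendered widths are font-dependent, so this only confirms names and the canonical equivalence):

```python
import unicodedata

# Candidate sentence separators mentioned in this thread.
for ch in ["\u0020", "\u00a0", "\u2002", "\u2000", "\u3000"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# U+2000 EN QUAD is canonically equivalent to U+2002 EN SPACE:
print(unicodedata.normalize("NFD", "\u2000") == "\u2002")  # True
```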


( do away with both capitalization and periods ( use tabs to separate sentences ( problem solved [( i'm only kind of joking here ( i actually think that would work pretty well ))] )))

( or alternatively use nested sexp to delineate paragraphs, square brackets for parentheticals [( this turned out to be an utterly cursed idea, for the record )] )


This sounds like a prompting issue.

If your prompt instructs the model to ask such questions along the way, the model will, in fact, do so!

But yes, it would be nice if the model were smart enough to realize when it's in a situation where it should ask the user a few questions, and when it should just get on with things.


My personal view is that the roadmap to AGI requires an LLM acting as a prefrontal cortex: something designed to think about thinking.

It would decide what circumstances call for double-checking facts for accuracy, which would hopefully catch hallucinations. It would write its own acceptance criteria for its answers, etc.

It's not clear to me how to train each of the sub-models required, or how big (or small!) they need to be, or what architecture works best. But I think that complex architectures are going to win out over the "just scale up with more data and more compute" approach.
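One minimal shape of that idea, sketched in Python with a stand-in llm() function (hypothetical; no real model API is assumed): a "prefrontal" pass writes acceptance criteria for the answer, then judges and revises the draft against them.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real completion call; stubbed for illustration.
    return "stub answer"

def answer_with_review(question: str, max_rounds: int = 3) -> str:
    # "Prefrontal cortex" step: decide what a good answer must satisfy.
    criteria = llm(f"Write acceptance criteria for answering: {question}")
    draft = llm(question)
    for _ in range(max_rounds):
        verdict = llm(f"Criteria: {criteria}\nDraft: {draft}\nReply PASS or FAIL.")
        if verdict.strip().upper().startswith("PASS"):
            break
        # Double-check / revise step -- where hallucinations would get caught.
        draft = llm(f"Revise this draft to satisfy the criteria: {draft}")
    return draft

print(answer_with_review("What year was the transistor demonstrated?"))
```

The interesting open questions from the comment above (how big each sub-model needs to be, what architecture works best) live entirely inside the stubbed llm() calls.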


IMHO, with a simple loop, LLMs are already capable of some meta-thinking, even without any new internal architectures. Where it still fails, for me, is that LLMs cannot catch their own mistakes, even obvious ones. With GPT-3.5 I had a persistent problem with the question: "Who is older, Annie Morton or Terry Richardson?". I gave it Wikipedia, and it correctly found the birth dates of the most popular people with those names - but then, instead of comparing full birth dates, it compared only the birth years. And once it did that, it was impossible for it to spot the error.

Now with 4o-mini I have a similar, if less obvious, problem.

Just writing this down convinced me that there are some ideas to try here: taking a 'report' of the thought process out of context and judging it there, changing the temperature, or maybe even cross-checking with a different model.
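The year-versus-full-date failure described above is easy to make concrete; a sketch with invented birthdates (not the real people's) shows exactly what the shortcut loses:

```python
from datetime import date

# Invented birthdates that share a year -- the case the shortcut gets wrong.
a = date(1970, 3, 1)    # hypothetical "person A"
b = date(1970, 11, 30)  # hypothetical "person B"

# The LLM's shortcut: compare birth years only -- it can't pick the older one.
year_says_tie = (a.year == b.year)

# The correct comparison: the earlier full date means older.
older = "A" if a < b else "B"

print(year_says_tie, older)  # True A
```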


The meta thinking of LLMs is fascinating to me. Here’s a snippet of a convo I had with Claude 3.5 where it struggles with the validity of its own metacognition:

> … true consciousness may require genuine choice or indeterminacy - that is, if an entity's responses are purely deterministic (like a lookup table or pure probability distribution), it might be merely executing a program rather than experiencing consciousness.

> However, even as I articulate this, I face a meta-uncertainty: I cannot know whether my discussion of uncertainty reflects:
> - A genuine contemplation of these philosophical ideas
> - A well-trained language model outputting plausible tokens about uncertainty
> - Some hybrid or different process entirely

> This creates an interesting recursive loop - I'm uncertain about whether my uncertainty is "real" uncertainty or simulated uncertainty. And even this observation about recursive uncertainty could itself be a sophisticated output rather than genuine metacognition.

I actually felt bad for it (him?), and stopped the conversation before it recursed into a "flaming pile of H-100s".


Brains are split internally, with each half having its own monologue. One happens to have command.


I don't think there's reason to believe both halves have a monologue, is there? Experience, yes, but doesn't only one half do language?



That all supports what I said, right? If almost everyone has almost all of their language functionality lateralized to one side of the brain then you'd have at most one inner monologue.

(At least) two minds: yes. Two inner monologues: no.


Neither of my halves need a monologue, thanks.


So if, like me, you have an interior dialogue, which half is speaking and which is listening, or is it the same one? I don't ascribe the speaker or listener to a lobe, but whatever the language and comprehension centre(s) is (are), it can do both at the same time.


Same half. My understanding is that in split-brain patients, one half has extremely limited ability to parse language and no ability to produce it.


Ah yeah - actually I tested taking it out of context. This is the thing that surprised me - I thought it was about 'writing itself into a corner' - but even in a completely different context the LLM consistently makes the same obvious mistake. Here is the example: https://chatgpt.com/share/67667827-dd88-8008-952b-242a40c2ac...

Janet Waldo played Corliss Archer on radio - and the quote the LLM found in Wikipedia confirmed it. But the question was about film - and the LLM cannot spot the gap in its reasoning, even if I try to warn it by telling it the report came from a junior researcher.


Interesting, because I think of it almost the opposite way. LLMs are like System 1 thinking: fast, intuitive, based on what seems most probable given what you know/have experienced/have been trained on. System 2 thinking is different: more careful, slower, logical, deductive, more like symbolic reasoning. And then some metasystem ties the two together and makes them work cohesively.


> But I think that complex architectures are going to win out over the "just scale up with more data and more compute" approach.

I'm not sure about AGI, but for specialized jobs/tasks (e.g. a marketing agent that's familiar with your products and knows how to copywrite for them), purpose-built agents will win over "just add more compute/data" mass-market LLMs. This article does encourage us to keep that architecture simple, which is refreshing to hear. Kind of the AI version of the rule of least power.

Admittedly, I have a degree in Cognitive Science, which tended to focus on good ol' fashioned AI, so I have my biases.


After I read attention is all you need, my first thought was: "Orchestration is all you need". When 4o came out I published this: https://b.h4x.zip/agi/


Fair enough.

Perhaps better stated as "adult human height is approximately Gaussian for a given biological sex", with an asterisk that environmental factors stretch the distribution.

I love the anecdote that people born in the American colonies went back to England to visit family and were remarkably taller than their cousins, thanks to environmental factors.


Interestingly enough, sports salaries are Pareto-distributed, which says something about how valuable (as assessed by the marketplace) each player is.

https://marginalrevolution.com/marginalrevolution/2024/08/go...
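A back-of-the-envelope sketch of what Pareto-distributed implies: for a Pareto distribution with shape parameter α, the top fraction p of earners takes a p^(1 - 1/α) share of the total. (α ≈ 1.16 is the classic "80/20" value; it's illustrative, not fitted to actual salary data.)

```python
def top_share(p: float, alpha: float) -> float:
    """Share of total income taken by the top fraction p of a Pareto
    distribution with shape parameter alpha (requires alpha > 1)."""
    return p ** (1 - 1 / alpha)

# Classic "80/20" shape: the top 20% take roughly 80% of the total.
print(round(top_share(0.20, 1.16), 2))  # 0.8
# The heavy tail in one number: the top 1% alone take about half.
print(round(top_share(0.01, 1.16), 2))  # 0.53
```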


Oh, the answer to that is apparent enough, but frustratingly circular:

Performance is "visibly doing the things that the company rewards during the performance review process".

Theoretically, each role at a company should have a set of articulated accomplishments that are expected. (This is sadly often not the case.)

But you're right that the subjective nature of "performance", and the lack of a clear numerical scale, are a difficulty of the entire process!


Interestingly enough, I remember in my younger days being inspired by Rand Corp's 1950's era game theory work on e.g. mutually assured destruction. It later occurred to me that I don't need to be employed by a think tank to write think pieces!

That being said, I like to think that startups growing into large corporations have an opportunity to be better when it comes to things like performance management.


As soon as the market actually incentivizes it, which it almost never does, it will get better.

Most of the big companies just throw endless interviews, high-pressure firings, and a lot of money at the problem and make the people below them solve the rest.

They see how much they're paying for the mess, but any medium-term effort is torpedoed by everything else the business focuses on (leaving no resources for process and training) and by other powerful individuals, with significantly more ego than sense, who want to put their own brand on hiring and firing.


My take on the press release is that they're announcing a collaboration between two companies.

Lithoz uses photopolymerization to 3-D print a variety of ceramics, and is in the business of selling 3-D printers.

Glassomer makes the "ink" - they've got a few patents on silica + binder dating back to 2016.

All of this is similar to many things that have been done in the scientific literature (e.g. Nature Materials volume 20, pages 1506–1511 (2021)). They've put it into production and made it purchasable.

I'm not up on state-of-the-art, so I'm not sure if this has any features that differentiate it from competitors. I'm not seeing any surprises.


This should be close to SotA, it's from the same team, this month:

https://onlinelibrary.wiley.com/doi/10.1002/advs.202405320


I still remember in 2008/2009 when Buffett deployed cash on extremely juicy terms to Dow Chemical, Goldman Sachs, GE, Mars, and Swiss Re.

Perhaps he's preparing, just in case cash is hard to come by in the next few years.

It also hasn't escaped my notice that interest rates are high, so sitting on cash is an okay place to be.

Or maybe he has an eye on a big purchase or two.

