
> use AI to write this article

While I understand you retracted your assumption that someone used AI to write their response, I feel the increasingly gratuitous leveling of "AI ghostwriting" accusations is detrimental to HN and to writing as a whole. Plenty of humans can write cohesively in the passive voice (in fact, plenty of us who did well in our writing classes do), and more critically, if the underlying thesis and argument of the article hold true, who cares whether it was written by a human or an AI?

And more fundamentally, ghostwriting has been the norm for decades, and something being ghostwritten by AI or Humans makes no difference.


I'm still accusing her of using AI.

And to AI-speak it: It's not the fact that she's using a computer I dislike. It's the fact that this paragraph structure is now overused and is aggravating.


Quality of writing matters because it affects the comfort of reading. Slop remains slop even if the argument it carries is fine. Here, for example, one question is posed and answered twice, two paragraphs apart. It reads oddly, and I wouldn't expect it from a human.

> Slop remains slop even if the argument it holds is ok

This sounds more like a religious argument than a logical one, tbh

At what point does it turn into a cult?


Dunno, I made a concrete point which you ignored.

I feel like it's appropriate to call out slopstyle articles. Truthful or not, slopstyle should be discouraged, the same as LinkedInfluencer style should be discouraged. Neither style encourages succinct communication.

How do people define what is slop and what isn't?

This attempt could be refined but it's a decent start:

> [...] having a polished appearance but lacking originality. None of the points it makes are novel and it doesn't connect them in novel ways either.

https://lobste.rs/c/qtolag

I don't think this post qualifies though. It's a press release, not an article from Quanta Magazine.

> [...] if the underlying thesis and argument provided by the article holds true who cares if it's written by a human or AI?

> [...] something being ghostwritten by AI or Humans makes no difference.

I don't think AI-generated writing is at that level yet. But it's getting close.

"Jimi Hendrix Was a Systems Engineer": https://spectrum.ieee.org/jimi-hendrix-systems-engineer

I'm probably the only person who thinks this was written with an LLM (Claude). The code supporting it likely was too. The people who talk about "taste" being the last defense against AI aren't wrong, and I think that topic, along with a lot of others that are essentially of philosophical import, is beyond the ambit of what most people want to discuss when they criticize AI-generated content. We can only wave them off for so long.


> This attempt could be refined but it's a decent start:

>> [...] having a polished appearance but lacking originality. None of the points it makes are novel and it doesn't connect them in novel ways either.

I have some bad news for you. Not every human is a Mozart or an Einstein. The long tail of human output has plenty of examples of the lack of originality, from bodice-rippers and pulp paperbacks and sloppy 'journalism' and 'style' magazines and articles, to carbon-copy soldiers, children in school uniforms, derivative music and film, the 5-minute Bruce Willis vehicles at the tail end of his career (though he clearly had a very good reason to make those), the cookie cutter quick-fab homes in American sub-divisions, cogs in human machines and systems of all sorts, the banality of life itself (at times) ...


Taken to its extremes this rebuttal could qualify as "slop" according to the Lobste.rs comment I sourced that definition from. I'm not even trying to be snarky. This is almost an exact reiteration of a response to the linked comment.

20% of your response is just a reiteration of one that was made to the original comment that I linked to. As far as the remaining 80% goes, it's something to think about but I'm not sure what your own point is. Do you hold any of the things you named dear to you enough to not call them "slop"?


LOL

You must be new around here.

Since you sound sincere: "I have bad news for you" and its variant "Boy, do I have some bad news for you" are a 2000s internet-specific stock reply format: a dry, corrective, often smug setup that means "your assumptions are wrong". More recently, it got turned into memes.

In this specific case, the unstated assumption is that human output inherently bears originality, as opposed to AI output.

Proper use of that phrase is an art form, a rhetorical flourish reserved for use by those skilled in the art of the Internet put-down, and elicits a soft knowing smile in those that enjoy banter. :-) Obviously, @mjec on lobste.rs is one so skilled.

That you failed to recognize it, twice, marks you as human, and one that's not very savvy in the ways of the Internet, or able to distinguish slop from art. Any decently trained AI would have recognized it immediately.

And no, I don't hold any of those things I named dear enough to not call them slop, because, dude, I did call them slop...

LOL

This is too much fun. Sorry that it came at your expense, I guess.

I'll have to play with the 2B and 9B models to see if they fail to recognize the phrase. The bigger models all recognize it.

LOL

Now get off my lawn ;-)

PS: that burning sensation you likely feel around your ears is the subliminal recognition that, in this exchange, AI is winning out over a human, and it's not even present...

Again, apologies that my merriment is at your expense. Hopefully you don't take yourself too seriously :-)


Just because you can't tell that a cup of coffee is exactly at 157F doesn't mean that you can't tell if it's too hot.

Next time I will use phrases like "long winded", "too vague", "never seems to come to the point" rather than "This article seems like AI slop".


> some of my classes are part of the cybersec curriculum

> as far as i have been able track (linkedin, email, etc.) roughly 3/4 of the previous graduating cybersec class has been unable to get a job in cybersec. probably 1/2 of those are struggling to find even basic sysadmin or password-resetter positions.

What is the curriculum that is being taught in your program?

If it's "how to be a Splunk or Crowdstrike" admin or "how to be an L1 SOC" I don't think that is a hireable skill at this point.


>If it's "how to be a Splunk or Crowdstrike" admin or "how to be an L1 SOC" I don't think that is a hireable skill at this point.

it's not, and up until recently (~2 years or so), the majority of our graduates were instantly picked up.


What is the curriculum, though? You don't need to send me the name of the institution, but I've been a hiring manager in the space and a PM for some of the larger companies, and I haven't been impressed by "Cybersecurity" bootcamps or degree holders unless they also had a tangible track record (e.g. HackerOne).

I feel a lot of hiring reflects that as well now - if I want a SWE to build a runtime agent I'm better off hiring a new grad from UC Berkeley who took CS162 and CS161 versus someone who took a summary course but doesn't understand how ld_preload works. Similarly, if I was doing AppSec for WebApps/OWASP I'd rather hire someone with an actual bounty track record on HackerOne instead of a bootcamp grad and potentially even a degree holder.
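For what it's worth, here is a minimal sketch of the LD_PRELOAD mechanism mentioned above, of the kind I'd expect a runtime-agent candidate to be able to explain. The filenames and the choice of `time()` as the interposed function are my own illustration, not anything from the comment; this assumes a Linux box with a C compiler on the path.

```shell
# Build a shared object that interposes libc's time() so every call
# returns the Unix epoch, then preload it into an unmodified binary.
cat > fake_time.c <<'EOF'
#define _GNU_SOURCE
#include <time.h>

/* Our definition of time() wins symbol resolution when preloaded. */
time_t time(time_t *t) {
    time_t fixed = 0;   /* pretend it is always the epoch */
    if (t) *t = fixed;
    return fixed;
}
EOF

cat > main.c <<'EOF'
#include <stdio.h>
#include <time.h>

int main(void) {
    /* This call resolves through the dynamic linker, so it can be hijacked. */
    printf("%ld\n", (long)time(NULL));
    return 0;
}
EOF

cc -shared -fPIC -o fake_time.so fake_time.c
cc -o main main.c

./main                              # prints the real epoch seconds
LD_PRELOAD=./fake_time.so ./main    # prints 0: our time() was used instead
```

This same interposition trick is why runtime security agents (and plenty of malware) care about preload order and why a candidate who can't explain it will struggle with runtime instrumentation work.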

My best hiring pipelines have been either vets who were in a Cyber MOS with a couple years of hands-on experience and then did a WGU-type program (the WGU program was just a checkbox for HR), or successful bounty hunters with a strong track record on HackerOne or Cobalt.


i have no argument with anything you have said here. but none of it really explains how we went from most kids being hired directly into the industry a few years ago to only a few of them now. our curriculum has not changed enough in the last few years to be the culprit.

we understand the importance of meeting the employers where they are at, so once a year we meet with ~15 industry partners (people in your position) and ask them directly questions like: "of your recent hires, what are they missing?", "what specific skills do you think needs more focus?", etc. that informs any changes we make for the following year. we have dropped entire courses and spun up new ones solely from industry input.

we also understand the importance of hands-on experience. it is probably the most common feedback we get from people in your position. we have a giant lab so kids get experience wiring up and configuring real physical appliances instead of doing it all in packet tracer or whatever. we have a bug bounty club, we attend and host hackathons, etc. courses are split roughly 50/50 between theory classes and practical classes. practical courses are mostly focused on "fix this shitty/vulnerable implementation of X" or "here is an existing environment, propose and then implement something that addresses X problem in the least-disruptive way" rather than "here is a fresh start, implement X in this perfect environment".

i don't want to give too much detail (e.g. course names and progression), as i would probably end up doxxing myself. but as someone who started off in the industry and then moved to a teaching position later in life, i am 100% with you. people who have real experience (e.g. a vet with cyber experience) are almost always going to be a better hire than a fresh graduate (i think this is true in any industry, and has always been true -- so it doesn't explain the change). but my job is to try and close that gap, and i think we have made good progress along that path. we are absolutely not a 6-month money-grab program.


A major issue, I feel, has also been the proliferation of lower-quality programs charging premium prices.

It's become harder to vet undergrads in the US for specific subfields because of either a lack of preparation or subpar career services.

Additionally, at least in CS/CE the number of candidates has skyrocketed, but the reality is most companies can limit new grad hiring to 10-20 target programs nationally and 2-3 local programs and get the talent pipeline they need.


> Additionally, at least in CS/CE the number of candidates has skyrocketed, but the reality is most companies can limit new grad hiring to 10-20 target programs nationally and 2-3 local programs and get the talent pipeline they need.

Why? In my opinion, new grads can do good work, and they accept a very low salary.


> can do good work

Doesn't mean will, and if their work isn't good the low salary isn't particularly beneficial.


People don't want to have to manage or derive context from searching across multiple different apps, emails, spreadsheets, CRMs, etc. It's like having your own personal assistant or oracle.

I find it interesting how European and American devs (especially on HN) are so luddite about AI and OpenClaw, but Chinese, Indian, and Israeli developers both domestically and in the diaspora have been adopting this kind of tooling fairly rapidly.

And then people wonder why the center of gravity for a number of engineering subfields is slowly shifting.


> it might also be worth pointing out that for a game company there's some risk in that messaging

This.


> Tim Sweeneys quixotic quest to re-create the Steam store

Building a marketplace or app store isn't quixotic - it helps build distribution and gives Epic the power needed to drive studios to the Unreal Engine, though this strategy clearly went to the back burner once Fortnite and its entire ecosystem became the golden goose.

That said, Epic is also significantly more overstaffed than its peers.


Total doesn't care.

They've funded massacres in 2021 [0] in order to unlock the $20 Billion Cabo Delgado LNG deal [1] in Mozambique. Greenwashing is the least of their worries.

I've found French business culture to be extremely refreshing compared to their DACH peers - French business norms tend to be much more pragmatic, and will try to maintain strategic autonomy by hook or by crook.

[0] - https://www.bbc.com/news/articles/c4gw119ynlxo

[1] - https://www.reuters.com/business/energy/mozambique-says-tota...


Total is also committed to expanding LNG - Total [0] and Oil India [1] are collaborating on a $20 billion LNG extraction megaproject in Mozambique, which was paused due to an Islamist insurgency during which a Total- and Oil India-funded paramilitary allegedly committed massacres against civilians [2] while putting down an Islamic State insurgency in Cabo Delgado.

The US, France+India, and China have been competing over this project for decades.

These are businesses - no one cares about morals, only interests. And it is in France's interest to unlock these kinds of LNG projects.

[0] - https://www.reuters.com/business/energy/mozambique-says-tota...

[1] - https://www.reuters.com/business/energy/oil-india-sees-resta...

[2] - https://www.bbc.com/news/articles/c4gw119ynlxo


> telling businesses to 'hack back' is inviting them to raise private armies

> That sort of thing does, however, fit with the present administration's ideology

These kinds of firms (usually branded as boutique consultancies) have already existed in the OffSec space for over a decade now in most countries and with tacit approval of their law enforcement agencies.

It was BSides this weekend and RSAC right now so you will bump into plenty of them walking around Moscone.


That made sense when it was just businesses defending their own operations from criminals, akin to banks having to use armed guards to move cash and bullion around. But when it's businesses defending against state-sponsored actors in the context of an actual shooting war, that's very different.

> That made sense when it was just businesses defending their own operations from criminals, akin to banks having to use armed guards to move cash and bullion around.

That's a rather crude analogy which misses the major dangers of vigilante hacking. A better analogy is allowing private guards to shoot you on suspicion of you having stolen their money based only on a claim that the money found in your wallet might be theirs.

To understand the problem, think of vigilante justice where some person/group assumes the roles of police, judge and executioner, circumventing due process which is due for a reason.

What happens if a corp doesn't like what you have on your website, spoofs some logs as if coming from it and then hacks the site to disable your ability to communicate?

Well, in that case you're toast. You may go to the judge, pay lawyers and waste your life on lawsuits fighting against a corp with a lawful reason to hack you, because if this becomes law, you will be guilty until proven innocent - that's very costly and hard to do. Your chances of success will be virtually zero, meaning the corps get a license to silence you with impunity.


Most APTs companies are already dealing with are either directly state-sponsored or state-permitted, as seen with the Cyrillic, Simplified Chinese, and Hebrew keyboard checks that have become fairly common in offensive payloads, so the division you are making has been nonexistent for decades.

This is just a tacit admission of a practice that has been occurring under the radar for years now.


How is it "tacit"?

Anyway, it's actually bad if there's been a problem for years, and the way it becomes widely known is by Authority(TM) legitimizing it instead of trying to stamp it out.


> it instead of trying to stamp it out

How do you stamp it out?

Russia, China, India, Singapore, Israel, South Korea, and Japan don't cooperate on stamping out these kinds of operations. Even EU states like Italy, Czechia, Poland, Hungary, and Greece have continued to allow these kinds of organizations to operate and proliferate capabilities, so much so that the European Parliament attempted an investigation that was promptly ignored by those states because "national security" falls under national sovereignty.

When it's morals versus national security, national security always wins, and no country will leave capabilities unused in the interest of maintaining a moral high-ground.

> the way it becomes widely known

It has been widely known in the security industry for years.


Devil's advocate - this actually shows how promising TinyML and EdgeML capabilities are. SoCs comparable to the A19 Pro are highly likely to be commodified in the next 3-5 years, in the same manner that SoCs comparable to the A13 already are.
