Hacker News — BlakeSimpson's comments

Over the past 30 days, the quality of ChatGPT has dramatically dropped. I've personally started using the API exclusively, as I don't see a quality drop there.


I'm in the digital marketing space, and I swear a new AI tool is being pushed on the daily. They're being eaten up by so many people and are also raising insane amounts of money. While I feel like 99% of the tools are absolute garbage, I believe small business owners are feeling FOMO and want to stay involved.


The vast majority of people are only headline readers. They see the title and they go on insane tangents on the subject. Facebook is the worst by far.


Because people want it is the short answer.

The way these companies see it, if there's a demand and an opportunity to make money, they go for it. Paired with knowing the government will always bail them out, it's a no-lose situation.


Yes, as long as they have an opportunity to make money, they will leverage it.


I feel like they are up against a new lawsuit daily. First all the class actions regarding fake prime sales and hard to cancel subscriptions, now this.

Although I believe this one is more of an attempted power play by the FTC; the others, not so much.


This isn't something anyone should be surprised about. They were basically handing out these "loans" as if they'd be fired for not doing so. The number of people bragging about it on social media at the time was enough to know that the vast majority of these loans were going to be abused.

For the SBA loans, if the applicant didn't request over 25k and put up collateral, they will never see a single payment.

For the PPP loans, that money was gone with the intention of not getting it back.


It blows my mind that a lawyer would go through all that schooling and DEBT just to use ChatGPT and risk losing their bar card.

Sure, I see plenty of uses for ChatGPT in their field, but to not even check the output before submitting it is what blows my mind.


It shouldn't be so mindblowing. People are lazy and dumb, as much so as they're allowed to be. We're a year away from this tech being completely mainstream to the point where we'll find out what happens when an entire society at once stops trying to think/work because there's a machine to do it for them now.


If they used ChatGPT to find real cases, that's not lazy, that's smart. The dumb and lazy part was NOT CHECKING ON THE NEW TECHNOLOGY FOR SOMETHING YOU SENT TO A JUDGE. Sorry for the caps, but so, so stupid. IIRC they asked ChatGPT if it lied, and it said no.


And they doubled down on dumb and lazy when the judge first raised the flag that something was wrong with what they submitted.


This. We thought that it was bad enough when people used google to diagnose themselves. We're entering uncharted territory now.


Before, it was a handful of mad people with things like sovereign-citizen ideas and stupid instructions on the Internet. But now it seems even trained folks will follow whatever an unverified source tells them...

Also, I wonder how well the training data was cleaned of all potentially harmful stuff, like, for example, the sovereign-citizen ideas...


And I'm a French model...


And this is exactly why I don't think Communism will work.


>completely mainstream to the point where we'll find out what happens when an entire society at once stops trying to think/work

You really need not wait that long if you've been observant enough. They were at one point clownish enough to believe in religion; now the vast majority of them believe in 'science', etc. Over-reliance on ChatGPT is just one of the newest manifestations of laziness and dumbness.


They didn't know it could lie?

But also, I'm pretty sure doctors and lawyers are among the least trusted professions with the highest perceived corruption. All that schooling is a barrier to entry to keep outsiders away; it doesn't guarantee competence. Basically, once you get in, graduation rates are close to 100%.


You’re very wrong about the perception of doctors. According to polls, doctors are the 2nd most trusted profession. https://news.gallup.com/poll/388649/military-brass-judges-am...

My wife runs the resident education for her division, and they are very aware of the dangers of sending an incompetent doctor out into the world to practice. They'll force them through extra training and make them repeat residency if they have to.

Residents also drop out of their programs and move into other ones. It's rare for them to drop out completely because they'd still be on the hook for $200k+ in loans. But medicine is a big field, and it's rare for someone who made it into and through med school not to be able to find something they're competent and capable at. Maybe you're not great with patients, but you can look through a microscope all day as a pathologist, etc.

Incompetent and corrupt doctors still make it through the process of course, but far far fewer than in our profession and just about any other profession I can think of.

As for lawyers:

Law schools have around a 77% graduation rate nationwide, and about 90% of law school graduates eventually pass the bar exam.

So we're looking at somewhere around 70% of people making it through, after a pretty selective filter to begin with.


> According to polls, doctors are the 2nd most trusted profession.

n=1 anecdote here:

When I was in research track biochem and bumped into premed students, nearly the entire population of premed didn't seem to care about the science behind what they were learning. They formed a very different social clique. The "top students" in that set were sharing previous years' organic chemistry exams and none of them would read papers or get involved in research. They were entirely disinterested in the science.

In my interactions with general practitioners, I like to talk about medicines' mechanisms of action, pharmacology, and the actual biochemistry behind diseases. Most of them seem to have no clue or retention of this information. I'm not trying to challenge them, either -- I'm genuinely curious to learn.

That's not to say all doctors are like that. A lot of the surgeons and specialists I know are walking tomes of information.


This seems analogous to a programmer who knows all the sort functions offered by their language of choice and when to use each, but has never learned how to write their own sort algorithm, and therefore can't hold a conversation about doing so. Plenty of employers and customers would be thrilled with such a programmer, if the job at hand is just to ensure that the data gets properly sorted.


I think a lot of your general practitioners are bound to follow protocol set forth by various medical organizations. So in that sense a lot of diving into research won’t help them. They need to understand and diagnose without having to worry about every little research paper. A lot of research papers aren’t truth - they are peer reviewed opinions.

Once you get to medical research groups that formulate those protocols, that’s where research becomes important.


>I think a lot of your general practitioners are bound to follow protocol set forth by various medical organizations. So in that sense a lot of diving into research won’t help them. They need to understand and diagnose without having to worry about every little research paper. A lot of research papers aren’t truth - they are peer reviewed opinions.

Isn't this even more reason for them to learn the fundamentals? Especially if physicians claim their work is both art and science, which is why we can't merely use science for diagnosis and treatment.


That’s why diving into research will not help general practitioners. Because research isn’t fact. You need a body of research along with oversight of statistical methods and its data collection to claim fact and that too with specific constraints.

So they do learn fundamentals but fundamentals do not come out of research papers.


Something like 17% of premed students end up actually becoming physicians.

That’s a very different sample. Then you have to consider that the people you are most likely to notice are the most vocal self-identified “premed” students, since most colleges don’t actually have premed degrees.

As far as interactions with doctors. GPs are the least likely to get into specifics because of the generalist nature of their work. Expecting them to remember something that they haven’t used in 10+ years is asking a lot.

Understanding calculus is incredibly useful to software engineers. But try asking a practicing software engineer 10 years out of school some textbook calculus questions.


This is because medicine is so complex, and usually doctors are in the business of caring for patients; while they may enjoy science, not every single detail of medicine, which is a very broad field, will be interesting to them. At some point, the MoA of a drug just becomes another fact to memorize for board exams. Clinical application is far more based on experience and familiarity than on abstract theoretical knowledge, even though the latter forms the base for medicine.


I don’t really know, but it might be okay for them to not focus on that knowledge.


Seems like we could train a lot of people for a lot cheaper if that was the case, though.


That’s essentially what Physicians Assistants and Nurse Practitioners are.


Coming from a country where med school admission is done by entrance exams, the whole "premed" thing always sounded extremely weird and wasteful. Doctors already have the longest training time, and then more is added on top?


In theory, the point of having medical students have an undergrad degree is that they will be more "well rounded". Unlike in many countries where university education is very focused, in the US, an undergraduate even studying STEM is required to take a certain number of courses like literature or history, and likewise someone studying literature or history is required to take some science courses. Of course just requiring people to take courses doesn't mean they will actually retain the knowledge after the tests, but it's a nice idea anyway.


Med school in those countries tends to last a year or 2 longer.

Also, most other countries have national education programs for secondary school. High school in the US is regulated at the state and local level, so it's very variable. As a consequence, the first two years of college tend toward general education.

Another note, there’s also a med school entrance exam in the US.


Sure but in the US it is 4 (undergrad) + 4 (med school) so in total it is 8 years rather than 5 or 6.

Agreed regarding US high school standards, but students who take AP courses in high school can take the MCAT. They may not do the "best", but they will know some of the material. Schools in the US like UMKC have 6-year programs where high school students take the MCAT.


In a lot of ways, the first two years of US undergrad are "remedial high school". This also explains why, e.g., American undergrads typically don't declare their majors until mid to late in their second year.


Studying for the test and taking easy classes, especially as premeds, has been a (likely accurate) stereotype of the medical profession since at least the 1970s & 80s.


I’m not sure how that’s any different than 90% of computer science students, or the average college student in general.

When I took non-required theoretical classes, I tended to be one of the only undergrads in the class


>You’re very wrong about the perception of doctors. According to polls, doctors are the 2nd most trusted profession. https://news.gallup.com/poll/388649/military-brass-judges-am...

https://www.opensecrets.org/federal-lobbying/top-spenders?cy...

They are the 4th largest briber in the US.


Fewer than 20% of physicians belong to the AMA, so that’s just an absurd statement. By that logic all old people are the 15th largest bribers.


I mean as much as politicians go after old peoples votes, you might be right.


Far be it from me to defend our system of lobbying the government, but there's a big difference between bribery and federal lobbying. Your phrasing poisons the well.


"Paying money to legislators and regulators with the intent of swaying their decisions to suit your interests".

Lobbying is legal because we live in a society that places such a high value on money that we've accepted bribery as a means to increase its power.


No.

Whatever difference you find between bribery and federal lobbying with money, the outcome is the same.

Money goes from the person who wants something to the person with power who receives it. Everything else is theatre.


In the case of lobbying you can go look on a website and see who took money from whom when you’re choosing who to vote for.

The money that comes from lobbying is also limited and pays for reelection, the candidate doesn’t get to use it for personal expenses.

There are clearly problems with the system, but yelling "no! it's exactly the same as taking bribes" isn't helpful.


Everyone is piling onto you here, but another aspect is that many lawyers already don't do their own research, they use paralegals to do it for them.

With that information, I think it's less mind-blowing; they were simply assuming ChatGPT could do the work of a paralegal.


> many lawyers already don't do their own research, they use paralegals to do it for them.

If you mean that some lawyers use paralegals to do word searches for potentially-relevant case law (i.e., published judicial opinions as precedents), that might be true. But in my experience, good lawyers either do the raw research themselves, or at a minimum they use junior lawyers to do the initial searching for precedents; then if necessary, they do supplemental searching themselves, based on their greater experience.


I've worked with legal software professionally a few times, and kept thinking software won't replace lawyers, it'll replace paralegals. Not all, but a great deal of grunt work could be replaced with AI and good forms. Now if we could get them not to lie?


There still needs to be a human manually proofreading them, even if it's as trivial as looking up whether the cited cases actually exist. The same goes for code output from currently available AI tools: someone still needs to debug it to make sure it works, and fix any troubles that spring up during deployment and production.

Of course after some number of iterations, AI may gain the ability to self-check and self-correct to the point of achieving 100% accuracy, but I think that milestone will quickly lead to AGI and job security will quickly become an obsolete topic for every industry anyways.


Maybe. But I keep thinking it's not going to replace the single paralegal that a single lawyer has; it might, however, reduce the 9 paralegals to 6, and then 4, and then...


That's a fair point, but AI tools could also unlock higher potential for them to take on more workloads. Perhaps by maintaining the same number of paralegals or even hiring more, they could take on an exponentially higher number of cases than before. There's infinite room for new litigation as long as human society persists.


Well, then either it will empower the "little guy" to do more (more your example) , or empower big business to capture more big business (more my example). Wanna guess which one I'm betting on?


> Now if we could get them not to lie?

This reminds me. Whatever happened to IBM's Watson? It's not an LLM, right? But it seems similar enough function-wise.


Watson wasn't just one thing; it was a marketing term covering all of IBM's early AI-like efforts. They never had any real success making a product out of it and stopped marketing 'Watson'.


Do we even know why AIs "hallucinate"? Is it possible to prevent it?


> Do we even know why AIs "hallucinate"?

It's because, just like human memory, they aren't databases or search engines. Generative transformer models are basically next-word-prediction machines on steroids. They take the input and try to "guess" the most likely reply based on their training data.

These machines have no way to distinguish facts from fiction, only probabilities of combinations of words that would make the most plausible reply.

> Is it possible to prevent it?

There are methods to prevent this by incorporating specialised knowledge databases into the training material of these models. This, however, only works with models that have been finetuned on very specific tasks and topics [1].

Other approaches use AI to transform human questions ("bag of words" inputs) into queries against structured knowledge bases, match the results (e.g. tree-like structures of context and facts) to the question, and turn them back into human language [2]. The downside of these methods is that they're currently limited to simple QA formats, won't feel as "natural" as talking to a chatbot, and require carefully prepared and curated knowledge databases.

[1] http://jens-lehmann.org/files/2019/iswc_bert_simple_question...

[2] https://arxiv.org/abs/2303.13284
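To make the "next-word prediction on steroids" point above concrete, here's a minimal toy sketch. Every word and probability below is invented for illustration (nothing like a real model's learned weights): the generator only knows which words tend to follow which, and has no notion of whether the resulting sentence is true.

```python
import random

# Toy "language model": a made-up table of which word tends to follow
# which, with what probability. A real transformer learns billions of
# such statistics from text; it stores likelihoods, not facts.
NEXT_WORD_PROBS = {
    "the":   {"court": 0.5, "case": 0.3, "moon": 0.2},
    "court": {"ruled": 0.6, "dismissed": 0.4},
    "case":  {"was": 0.7, "closed": 0.3},
}

def generate(start, steps, seed=0):
    """Sample a plausible-looking continuation, one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(out[-1])
        if dist is None:  # no statistics for this word: stop
            break
        words, weights = zip(*dist.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Every output is statistically plausible by construction, but nothing
# here checks it against reality -- which is why fluent-but-false
# output ("hallucination") falls out of the mechanism naturally.
print(generate("the", 3))
```

The knowledge-base approaches in [1] and [2] essentially bolt a fact source onto this sampling loop, rather than changing the loop itself.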


I think it's because AIs aren't actually in touch with reality as we know it, they're generating text from a sort of graph of concepts that comes out of the training process.


> risk losing their BAR card.

Did they risk it? Even after it became obvious the lawyer was lying and trying to cover it up, and even after never admitting wrongdoing, all they had to pay was $5k.


> all they had to pay was $5k.

They still risk being disbarred (not just hypothetically; it might actually happen), plus a massive hit to reputation (their own and the firm's). The latter alone is a strong punishment, given how widely publicized this incident was in the media. It cost the firm a lot of potential clients, and the lawyer may struggle to find a decent position afterward (given how much more reputation-based the legal field seems to be compared to something like engineering).


Name change in 3..2..1..


It’s all good, man.


Do we know for sure that absolutely no further action is being taken? I really don't know the process, so I dunno whether this is it or just one of the consequences, and whether the bar association (is that the right org?) they're part of would want to weigh in on it.


Well, one thing is that "law" was often the field of choice for the well-schooled who didn't know where to go next.

Now many, if not likely most, of the well-schooled are generally intelligent, but of course there are also quite a few rich idiots as well.

(Today, of course, most of the latter go into tech.)


I feel like this is becoming more and more common with not only Amazon, but other big companies like anything rideshare (Lyft & Uber), food/shopping delivery, and others. Whenever you try to bring up a problem that might cause bad publicity, they "proactively" start deactivating accounts.


After learning more about what these "impossible burgers" actually are, it was a no-brainer that they were never going to be the answer. Looking forward to the development of this!


I'd say so, as long as the content you're sharing is of actual quality.

