
This might be the most brain dead way to waste tokens yet.

I'm trying not to be negative, but would a human ever read any of the content? What value does it have?


It is challenging. In my CS degree, grading for programming questions fell into two areas:

1. Take-home projects where we programmed solutions to big problems.
2. Tests where we had to write programs on paper during the exam.

I think the take home projects are likely a lot harder to grade without AI being used. I'd be disappointed if schools have stopped doing the programming live during tests though. Being able to write a program in a time constrained environment is similar to interviewing, and requires knowledge of the language and being able to code algorithms. It also forces you to think through the program and detect if there will be bugs, without being able to actually run the program (great practice for debugging).


This scared me - I thought he died for a minute.


Death is usually permanent.


Ha! I think this is what he meant to say - "This scared me - I thought for a minute he died."


On the scale of minutes, not at all, no.

I've had half a dozen major surgeries and spent nearly 2 days on life support in 1994. I probably died for a minute or so more than once. Thanks to skilled surgery and anaesthesia teams, I came back.


For what it's worth, Dr. Knuth believes otherwise.


I learned from Dr. Knuth that you can put a tombstone on something, without burying it, and save yourself the hassle. It's very effective! Especially if you're just going to plow that area and move all the live ones eventually anyway.
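For anyone who hasn't seen the trick: in an open-addressing hash table you can't just empty a deleted slot, because that would break the probe chains of entries inserted after it. Instead you mark the slot with a tombstone and only compact (bury) during a later rehash. A toy Python sketch, just to illustrate the idea (not Knuth's actual code):

```python
TOMBSTONE = object()  # marker for a deleted slot


class OpenAddressingTable:
    def __init__(self, size=8):
        self.slots = [None] * size  # None = never used

    def _probe(self, key):
        # simple linear probing over every slot
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            yield i
            i = (i + 1) % len(self.slots)

    def insert(self, key, value):
        for i in self._probe(key):
            slot = self.slots[i]
            # reuse empty slots and tombstones alike
            if slot is None or slot is TOMBSTONE or slot[0] == key:
                self.slots[i] = (key, value)
                return
        raise RuntimeError("table full")

    def get(self, key):
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is None:       # never-used slot ends the probe chain
                raise KeyError(key)
            if slot is TOMBSTONE:  # deleted: keep probing past it
                continue
            if slot[0] == key:
                return slot[1]
        raise KeyError(key)

    def delete(self, key):
        for i in self._probe(key):
            slot = self.slots[i]
            if slot is None:
                raise KeyError(key)
            if slot is not TOMBSTONE and slot[0] == key:
                self.slots[i] = TOMBSTONE  # bury later, during a rehash
                return
        raise KeyError(key)
```

The point is that `get` treats a tombstone as "keep looking" while a truly empty slot means "stop": deleting eagerly would orphan anything that probed past the deleted entry.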


I can type about 45 WPM on my phone keyboard right now. I'm definitely faster on a full-size keyboard, but I'm not sure I would be faster on this small keyboard. I know a lot of people wanted a keyboard like this when the iPhone first came out, but now, with a lot of practice using mobile keyboards, I'm not sure it's needed.

One of the main arguments for hardware keyboards was that you could type without looking. I don't really look at my phone keyboard when typing; I roughly know the spacing of the letters. Plus, autocorrect is really good at this point, so when I do make mistakes the phone usually just corrects them.

The only use case I could see for this is if the keyboard had control/alt/esc keys - in that case shelling into a machine on my phone might become slightly more efficient than an onscreen keyboard.


I don't necessarily disagree with your comment, but you don't seem to address the chief virtue claimed by this product's marketing material:

> Free up your screen for content
> Content First
> Maximize your screen space for apps and content while you create with Clicks.
> Clicks on. Screen size up.
> Make more space for apps and content by moving the keyboard off your screen.

I have had the experience of the software keyboard making my screen feel claustrophobic from time to time. It has never been bad enough that I would consider reaching for something like Clicks, but it's certainly a problem I would rather never encounter if I had the choice.


I'm waiting for the day they remove the keyboard altogether and just have everyone use voice-to-text, and eventually thought-to-text. I'd love to see a Black Mirror episode expanding on the meme of everyone sitting at a table texting each other: because of voice-to-text, they'd stumble on the ingenious idea that they could just talk to each other instead!


And that is the last day I ever eat at a restaurant.


You won't eat at a restaurant where the people at the table talk to each other instead of texting each other?


I use the Unexpected Keyboard [1] on Android. I can use shortcuts like on a real keyboard and it is easy to get used to.

[1]: https://github.com/Julow/Unexpected-Keyboard


> 1. I want _nothing_ to do with user data. Nothing. Toxic nuclear waste. The idea of keeping the waste on hand needs strong justification.

Additionally, with regulations like the CCPA in some jurisdictions, this isn't even optional anymore. At some point you will need to hard-delete user data.


I can forget that you were my customer, but I can’t forget that there was a customer. That quickly turns into tax evasion, for one thing.

Much of the user information we acquire is the result of greed, nosiness, or laziness. If deleting users is difficult for you, that’s an architectural problem that has next to nothing to do with my comment.

The world is absolutely full of rules that have exactly one exception. If they have two we apply the Rule of Three and either fix it or change it back to two. I have absolutely no qualms about treating user data as the exception here.

If you’re Amazon, you don’t even need much of the PII until checkout time. Collecting or looking up that data early is a security risk. Checkouts are going to be orders of magnitude fewer operations than your browsing traffic. When the order of magnitude changes, the solutions often change. And lastly, checkout is when you make money. Expensive operations, like inserting into a table with fragmentation problems, are much easier to justify when they are attached to revenue events.

An ad campaign that falls flat can bankrupt you. A fire or earthquake can bankrupt you. A fancy and unusable site relaunch can bankrupt you. Spending a little money at the point of sale cannot.
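To make the "forget the customer, keep the fact of the sale" point concrete, here's a toy sketch (all the table and field names are made up; a real system would do this in SQL with foreign keys): hard-delete the PII, keep a pseudonymous transaction row for the books.

```python
import uuid


def forget_customer(db, customer_id):
    """Hard-delete a customer's PII while keeping the revenue
    records that tax law requires us to retain."""
    tombstone = f"deleted-{uuid.uuid4()}"  # anonymous stand-in for the customer
    for order in db["orders"]:
        if order["customer_id"] == customer_id:
            # keep amount/date for the books, drop everything personal
            order["customer_id"] = tombstone
            order.pop("email", None)
            order.pop("shipping_address", None)
    db["customers"].pop(customer_id, None)  # the PII itself goes away
```

If this is hard to bolt on after the fact, that's the architectural problem mentioned above: PII scattered across tables that were never designed to be scrubbed.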


I don’t know a lot about how it works, so forgive me if this is a silly idea. I wonder if attestation could be done using real Apple devices while leaving the private key on the user’s Android: similar to the old Beeper approach, get the signed attestation and send the result to the phone. It could still be secure, since the private key used to encrypt messages stays local on the user’s device. I guess the issue might be a cat-and-mouse game of Apple detecting Beeper’s flock of Apple hardware and trying to disable it all (given that many people would be using the same Apple devices).


I think iMessage is still using older attestations, but generally an attestation of this sort (App Attest, Play Protect API) represents a chain of the hardware, boot process, OS, and application.

So iMessage is not going to be willing to hand out private keys or negotiate them for a third party application, and Beeper will not be trusted to register a private key itself.

Android iMessage support would be weird because there is no iMessage application - there is an application which lets you send SMS and upgrade to MMS or iMessage when available. So, if there ever was an official Messages app for Android, I would somewhat expect it to also offer to take over being the default application for SMS/MMS.


This is new to me, so I might be wrong, but I don't get why they share revenue with the creators of these GPTs. They are basically just prompts that consist of a few sentences. There's no value add, and the more ChatGPT improves the less prompting will be required. These GPTs feel closer to bookmarks than an actual program.


GPTs are apps. They are a prompt; files (up to 10 files, 200 MB each, automatically indexed into a vector DB); and a Linux VM that can run code based on prompts or code you attach. They can call any web API (I made a Gmail one for myself) and have access to GPT-4V, DALL-E, and Bing Search. You can mash all that up in really creative ways, then press one button to publish and get a link you can share.

The VM can easily do things like image and audio processing, ffmpeg, generate Office docs, etc.

You don’t have to run a server or pay any operating costs for them, just the $20/month ChatGPT Plus subscription.


What does your “Gmail” GPT do and what do you use it for?


I haven’t done much with it yet but it’s fun to play with.

So far one thing that was nice is “look up the tracking number for the thing I just bought”

I’ll probably play around with Bard’s gmail integration to find more use cases.


Is that the sort of thing they would actually let you publish? Seems like inexperienced users might wind up really screwing up their inbox.


You pass an OpenAPI spec on creation. You can remove any methods you fear may be risky, and leave just enough so that it can read your emails or calendar, if you feel comfortable with that.


OpenAI would probably let you publish it (they have no review process today) but Google probably wouldn’t.

I don’t care though because I’m making these for myself.


mind sharing the code? just for curiosity. keen to improve my email productivity.


The crazy thing is there is no code! The instructions are just “you are a helpful email assistant. You search the user’s gmail in response to their questions” and you just paste the OpenAPI spec and OAuth details for Gmail into the GPT maker form. I asked GPT-4 to write the OpenAPI spec for the Gmail APIs necessary to search my inbox.
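For the curious, the pasted spec is roughly this shape. This is a hypothetical, trimmed-down sketch (the operation IDs and descriptions are mine), but the two endpoints are the real Gmail API paths for searching and fetching messages:

```yaml
openapi: 3.0.0
info:
  title: Gmail search
  version: "1.0"
servers:
  - url: https://gmail.googleapis.com
paths:
  /gmail/v1/users/me/messages:
    get:
      operationId: searchMessages
      summary: Search the user's inbox
      parameters:
        - name: q
          in: query
          schema:
            type: string
          description: Gmail search query, e.g. "from:ups tracking number"
  /gmail/v1/users/me/messages/{id}:
    get:
      operationId: getMessage
      summary: Fetch a single message by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
```

The `q` parameter takes the same query syntax as the Gmail search box, which is what makes "look up the tracking number" style requests work.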


ahhh i assumed there was some custom coding required for the OpenAPI spec


How did you give it access to your inbox? Does it use the Code Interpreter with the Gmail API, and ask for your credentials, or something?


GPTs support OAuth and tokens, it just asks you to sign in the first time you use it.


Why do you need gpt to find the tracking number? Gmail has pulled out tracking numbers in emails for years.


You can use it as part of a larger query. Like “look up my tracking numbers and render them in a table with delivery date and current location” or “plot them on a map” or whatever.


What does your integration with gmail do?


> They are basically just prompts that consist of a few sentences. There's no value add, and the more ChatGPT improves the less prompting will be required. These GPTs feel closer to bookmarks than an actual program.

So basically every ChatGPT wrapper startup


Exactly, hence the existential panic those startups experienced when this was announced.


Not all GPTs are just prompts. You can create GPTs that include uploaded content or code.

A GPT I created for my own use invokes a Python function to do something GPT-4 cannot do itself.

Other GPTs include knowledge bases.


Can you share your use case with the Python function?


The first line of my prompt is:

  Create a diagram in mermaid syntax based on what the user asked. Pass the code to the create_mermaid_link function below to get a link to Mermaid Live. Display the clickable link to the user.
The prompt has more detail that tells it what types of diagrams to prefer, what types of escaping to use, how to order lines etc.

The file I uploaded contains the create_mermaid_link() function, which relies on being able to base64 encode a string.
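A minimal sketch of what such a function might look like. The state object is an assumption based on the URLs mermaid.live generates (its exact format may differ), but the shape is: JSON-encode the diagram state, base64 it, and append it to the editor URL.

```python
import base64
import json


def create_mermaid_link(diagram_code: str) -> str:
    """Return a Mermaid Live editor link for the given diagram code.

    The state dict below is an assumption about mermaid.live's
    "#base64:" URL format; adjust if the site's format changes.
    """
    state = {"code": diagram_code, "mermaid": {"theme": "default"}}
    payload = json.dumps(state).encode("utf-8")
    encoded = base64.urlsafe_b64encode(payload).decode("ascii")
    return f"https://mermaid.live/edit#base64:{encoded}"
```

Since the whole diagram rides in the URL fragment, no server is needed: the GPT just has to get the escaping right, which is why the prompt spells out escaping rules.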


These people are able to find customers that OpenAI might not have found and bring that revenue to OpenAI. Looking at the ads I see for these products on twitter, I suspect the average user of these products is so non-technical as to be unaware that they could just use ChatGPT.


Whether something is worth paying money for isn’t just about how technically difficult it is. Time is money. If it takes me a day to figure out how to configure a thing, that’s time I could’ve spent on something else.

As for why OpenAI would pay people to create them, it’s simple: expand the ecosystem.


For a counterexample, see Cauldron in the above repo. It looks like a lot of work and refinement went into that.


GPTs have:

1. Custom prompts
2. Knowledge
3. Actions

You are talking about only 1) here.


> there’s no value add

That’s for the customer to decide. They increase overall revenue, that’s why revenue is being shared.


There's a huge opportunity here for a rug pull: give people a small revenue share to do your market research for you, then swoop in with your own app and better-tuned model once you figure out what applications are possible.


That's like arguing there's "no value" in a business that sells pizza.

I mean, anyone can just make their own pizza. Where's the value in someone else doing it for you?



WebKit isn't Safari, though it's close. A lot of integrations and some APIs Apple provides aren't available in WebKit.


This won't even work to solve the problem they're trying to solve. If I'm a scraper or someone who wants to drive fake ad impressions, what stops me from faking the attestation info? There's some mention in the original article about the attester validating that the attestation data is signed on the client, but that just pushes the problem down the stack a bit. Someone could still spin up VMs and automate the scraping in a real environment that passes attestation. The author claims this will ensure only humans are viewing said data, but it doesn't really ensure that; it only adds a couple of steps.

I also find it funny that the authors point to mobile platforms as an example of how this will work well. Last time I worked with ad tech, mobile ads were flooded with fake impressions, and I highly doubt that has changed. The funny thing about players like Google is that they want to be able to tell advertisers they're doing a lot to prevent fake impressions to get them to buy ads, but they don't really want to solve the problem because it would cost them a lot of money. So they kinda play the line and develop tech like this that sounds fancy but doesn't actually stop the problem in practice.


I haven't managed to learn exactly how this works, but you're looking for the term 'remote attestation'. This aims to prove that your computer is only running the approved software by having the TPM look into the computer's memory, hash the running software and its configuration, and sign the hash with a unique private key burned into the TPM that is impossible to extract without physically invading the chip.
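A toy sketch of that flow. Nothing here is a real TPM API, and real schemes use asymmetric keys (the verifier only holds the public part) and measure a whole boot chain of hashes; this just shows the measure-then-sign shape, with HMAC standing in for the signature.

```python
import hashlib
import hmac

# Stand-in for the unique key burned into the TPM at manufacture.
DEVICE_SECRET = b"burned-in-at-the-factory"


def attest(software_image: bytes, nonce: bytes):
    """Device side: measure (hash) the running software and sign the
    measurement plus a verifier-supplied nonce, to prevent replays."""
    measurement = hashlib.sha256(software_image).digest()
    signature = hmac.new(DEVICE_SECRET, measurement + nonce, hashlib.sha256).digest()
    return measurement, signature


def verify(measurement, signature, nonce, approved_hashes):
    """Verifier side: check the signature is genuine and that the
    measured software is on the approved list."""
    expected = hmac.new(DEVICE_SECRET, measurement + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected) and measurement in approved_hashes
```

The nonce is what stops a scraper from replaying yesterday's valid attestation; the approved list is what makes this a walled garden, which is exactly the objection upthread.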


It's a bit odd though. What if you built your own ML model and trained it on a set of data that you wrote yourself? Would the work generated by the AI from your prompts not be copyrightable?

The original copyright laws were thought up way before even cameras, and we're still trying to apply them today to generated AI. Why can't we just accept that the world is very different now and create new laws? Instead we keep trying to arbitrarily interpret the law in a biased way to fit our modern goals as best we can.


> The original copyright laws were thought up way before even cameras, and we're still trying to apply them today to generated AI

But the original laws worked well with cameras, didn't they?

The legal idea, that unless a human had creative input a work won't have copyright, doesn't fall afoul of AI-generated content. There's nuance of course: what counts as creative input, etc.

Of course, a new paradigm is possible with the advent of AI, but it would make copyright _looser_ rather than tighter, imho (and it would be to the progress of the arts and sciences to do so). But I don't see why it is fundamentally needed.

