Hacker News | past | comments | ask | show | jobs | submit | beau's comments


Such weird apps. Typical Meta.

The iOS app screenshots are all about Wayfarer AR glasses. Nothing about Meta.AI.


The app id (com.facebook.stella) was for an app called Meta View. It looks like they just rebranded an app that was previously used for smart glasses and are now using it as their official AI app.

Why rebrand rather than create a new app? Who knows. Maybe to demonstrate a high(ish) download count at launch, maybe for app/play store credibility.


They rebranded the app designed exclusively for that (Meta View) and added the chat features.


Yeah, but why? Honest question. It has a totally different purpose.


Users who have the old app will get the new, updated one installed automatically.

Also probably for App Store SEO/rankings.

Meta is known for scummy behaviour, as we all know…


Congrats Beau!


They are sent through an affiliate link to GoDaddy. We hope to answer these queries ourselves -- soon.


Unfortunately, emoji domains are just not that well-supported at many registries: https://en.wikipedia.org/wiki/Emoji_domain


This should not happen for .com names (where we do a live "double check"), but can happen with others. We build our index based on zone files and DNS, so if a name is not configured, it will show as available.
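To illustrate the idea: registered names only show up in a TLD zone file if they are delegated (have NS records), so "absent from the zone" is a fast but imperfect availability signal. This is a minimal, hypothetical sketch, not the site's actual code; the record format is simplified.

```python
# Hypothetical sketch of a zone-file-based availability index.
# A name missing from the zone file *looks* available, but may in
# fact be registered-but-unconfigured -- hence the live double
# check described above for .com.

def build_index(zone_file_lines):
    """Collect delegated names from (simplified) zone file NS records."""
    index = set()
    for line in zone_file_lines:
        parts = line.split()
        # Record shape assumed: name [ttl] [class] NS rdata
        if len(parts) >= 4 and "NS" in parts[1:4]:
            index.add(parts[0].rstrip(".").lower())
    return index

def looks_available(name, index):
    # Fast path: not in the zone file. False positives are possible.
    return name.lower() not in index

zone = [
    "example.com. 172800 IN NS a.iana-servers.net.",
    "hn.com. 172800 IN NS ns1.example-dns.com.",
]
idx = build_index(zone)
# looks_available("example.com", idx)                  -> False
# looks_available("totally-unregistered-name.com", idx) -> True
```

A live double check (e.g. an authoritative registry query) would then confirm the fast path's answer before showing a .com name as available.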


I had it happen for lots of .page, .dev, etc. Really cool utility though. :)


Making this perfect is on the TODO, thanks!


That sounds like GoDaddy's annual price for .ai domains.


Hi! I'm the CEO of Instant Domain Search. I originally built it in 2005 after attending YC's first Startup School, and have maintained it as a side project since then. AMA!


I'm a long time happy user. The site is fast, simple, and always seemed trustworthy in a market with seedy actors. You've helped me name multiple projects and companies - thank you!


I concur - it's really snappy and a great place to start if you're looking for a domain. Although it's not 100% accurate, I understand the technical limitations and frankly, I can't think of a better alternative.


Thank you!


Been a happy customer since 2005! Your profile says "W22" next to the company name, is that a reference to a YC batch? What are your plans for the site?

Tangent: you know how everybody has to keep explaining "web3" over and over again? Twitter threads, deep dives, YouTube explainers, etc.? Nobody had to spend much time explaining "web 2.0": you just saw Google Maps after MapQuest, or InstantDomainSearch after whatever else was there, and you instantly got it.


Shh, we are only two weeks into W22: https://instantdomains.com/


I could be wrong about this, but I believe you wrote a nice long article explaining how the search works and how you're able to search millions of rows instantly. It had some diagrams as well. It was quite an interesting read, of which I still have a faint recollection to this day. I can't seem to find the link anymore. Would you mind sharing it with us?

P.S. Sorry if I'm totally wrong about this; it could be from some other site for all I remember now.


It’s possible you read it on our site. We have some recent (and some very old) articles about how the magic happens, like this one about our open source word segmenter: https://instantdomainsearch.com/engineering/instant-word-seg...


What do you think needs to happen to make personal domains easy enough for the average person to buy and use, to increase data ownership?


Most registrars expose people to way too much complexity. The Internet is the original social network, but no one has made it easy to use. I think there is a lot of opportunity here.


I agree. What do you think of my approach here?

https://takingnames.io/blog/introducing-takingnames-io


So no domain squatting in response to a first phase of search here?


No, that'd be bad for business.


How is this different from literally dozens of other sites that do exactly the same thing?

Do a whois search, maybe cache the results a bit. Have affiliate agreements with some vendors. And scrape domain squatter prices.

Even a lot of registrars do this.

TLD-list (among others) does everything you do, better.


None of those sites existed when I first built this 17 years ago. WHOIS, even behind a cache, would not support our query volume. We focus on being really fast. Any specific features you'd like to see?


I’m on mobile. Any way to return only “available” results? E.g., only results where you can click buy and pay a normal registration price (and not $100 to $1M)?


Filters are on the TODO, thanks!


Honestly, you don't want to query a third-party database. If you have a really good idea for a domain name, it's already taken. OK, I'll retry: if you have a really good idea for a domain name, you don't want to share it with a third party. You only want to query the registry database(s) directly, and even then you want to select which ones. Why? Discretion. Say you want to start a new company. You check whether the name you came up with is taken. You verify it isn't, and hey presto, a few days later it gets domain squatted.


Why don't you just buy it straight away? Problem solved!


(Nobody ever thought of that!)

Because 1) you don't expect this to happen (confidentiality), and 2) you're still setting up your company/idea, which requires other administrative tasks that take time.

They want you to impulsively buy a domain out of fear of it getting hijacked.


I still don't see why not, it takes a few clicks and costs a few dollars. It seems that if you are a person that expects this to happen then it's a good strategy.

Otherwise you just decide which domain search you trust and hope your domain remains available.


Sorry, this page had a useEffect/setState render loop. We are running react@experimental with concurrent mode, and missed the error. Rolling out a fix now. Thanks!


These results are less accurate than Google Translate's. But they are far faster to get, and far less expensive to generate: https://cloud.google.com/translate/pricing — our goal here is speed. We want to search through many possibilities as quickly as possible.

The word vectors have been aligned in multiple languages. Using an approximate nearest neighbor search we are able to find the nearest vector to the input in multiple languages very quickly.
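The nearest-vector lookup can be illustrated with a toy brute-force version (the real system uses an approximate nearest neighbor index; the words and vectors below are made up for the example):

```python
# Toy illustration of nearest-neighbor search over "aligned" word
# vectors tagged by language. Brute-force cosine similarity stands
# in for the production ANN index; the 2-d vectors are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

vectors = {
    ("en", "hello"): [0.90, 0.10],
    ("it", "mondo"): [0.10, 0.90],
    ("de", "hallo"): [0.88, 0.15],
}

def nearest(query_vec, k=2):
    # Rank every (language, word) pair by similarity to the query.
    scored = sorted(vectors.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [key for key, _ in scored[:k]]

# nearest([0.9, 0.1]) -> [("en", "hello"), ("de", "hallo")]
```

Because the vectors are aligned across languages, a single query vector can surface close words in every language at once; an ANN index makes the same lookup sublinear instead of a full scan.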

To keep the example simple, we did not try to filter the data through hand-built language dictionaries. Instead, we simply drop words in other languages that also appear in the English .vec file. Words like "ciao" appear frequently enough in otherwise-English sentences that the example code drops them from Italian, so they are not shown in the results:

% curl -s "https://dl.fbaipublicfiles.com/fasttext/vectors-aligned/wiki..." | grep -n ciao
50393:ciao 0.0120 ...

One improvement would be to keep only words that appear in a hand-curated dictionary, instead of filtering out words that already appear in English. We decided not to show how to do this because we'd already introduced a few concepts, like aligned word vectors and approximate nearest neighbor searches, and wanted to keep the example as simple as possible.
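The duplicate-dropping step described above can be sketched in a few lines. This is an illustrative version, not the actual example code, and the tiny .vec inputs are invented:

```python
# Hypothetical sketch: drop words from a non-English .vec file that
# also appear in the English one, so loanwords like "ciao" are not
# surfaced twice. .vec format: first line "count dim", then
# "word v1 v2 ..." per line.

def load_words(vec_lines):
    """Return the vocabulary of a .vec file (skipping the header)."""
    return {line.split(" ", 1)[0] for line in vec_lines[1:]}

def filter_language(en_lines, other_lines):
    """Keep only words in `other_lines` that do not appear in English."""
    english = load_words(en_lines)
    kept = []
    for line in other_lines[1:]:
        word = line.split(" ", 1)[0]
        if word not in english:
            kept.append(word)
    return kept

# Invented toy data: "ciao" appears in both files, so it is dropped.
en = ["2 3", "hello 0.1 0.2 0.3", "ciao 0.0 0.1 0.2"]
it = ["2 3", "ciao 0.5 0.1 0.2", "mondo 0.3 0.2 0.1"]
# filter_language(en, it) -> ["mondo"]
```

The dictionary-based improvement would invert the test: keep a word only if it appears in a curated wordlist for its language, rather than dropping it for appearing in English.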


Thank you!


We've released the underlying Rust implementation here: https://github.com/InstantDomain/instant-distance with Python bindings at https://pypi.org/project/instant-distance — feedback welcome!


For Linux, change the copy command in the Makefile to

cp target/release/libinstant_distance.so instant-distance-py/test/instant_distance.so

and it works. Built and running. The main tree was macOS-only.

Here's the resource consumption in a sample run:

Time: 4.49 s, memory: 1,552 MB.

Single word, three languages including en.


How did you figure this out? I've done lots of Linux software build troubleshooting as a result of using Gentoo, Buildroot, and pacaur, but this doesn't ring any bells as a common issue.


They probably tried it, saw that it couldn't find a .dylib (the macOS shared object format), opened the three-line Makefile, and fixed it to copy a .so instead.


Did you try spaCy's most-similar method? It's written in Cython, so it's presumably quite fast as well. Thanks for the Rust implementation, though; I will most likely use this.


I’ve not much to say on the actual lib, it seems great! However, don’t feel compelled to put all your rust code into a single lib.rs. You can split your work into several files and use ‘pub use’ and ‘mod’ in lib.rs to re-export your functions & types into a public API of your choosing.

cargo check and format time might also slightly improve!


Funny, I often say the opposite: don't feel compelled to split up your lib.rs. It's really refreshing to see a nice, compact library in one or two files. It's much easier to follow, especially compared to "type per file". Of course, there are limits, but for a small lib like this, I personally would keep it to a single file, or maybe two.


I have a fair bit of experience writing Rust code and the current status is totally deliberate. I find module file sizes of about 400-800 lines of code optimal in terms of my ability to find things vs the unnecessary complexity of having to skip around files when changing something that touches an API boundary.

