Hacker News | MaknMoreGtnLess's comments

No go if the pennies are fiat. Has to be crypto, blockchain, blugh blag blah.

Even more points if it's a centralized blockchain like the ones a16z are funding.


> Ansible has a very low barrier to entry in terms of dependencies that need to be installed into the machines you want to manage: they need to have python 2.6+ or python 3.5+, and you need to be able to connect to the machines over ssh.

I understand what you're saying, but do understand that to other people these can be concepts they aren't familiar with or might actively struggle with.

That said - were there any obstacles you faced when you started with Ansible?

What could have been better (tooling/docs/training/etc.) to get you up to speed faster?


Shameless plug: https://hescaide.me/blog/2022/01/29/a-day-with-ansible/

This is after one day of interacting with Ansible. I did have previous experience with Linux, setting up CircleCI projects, and writing Dockerfiles. There are not many concepts to grok. I just cloned a playbook from GitHub and tried to rewrite it to fit my needs, reading the documentation for the parts I did not understand well. No books or video tutorials were needed.


Much appreciated!


I want to learn from you more.

What if I gave you 2 docker containers:

1. one with ansible loaded and ready to go

2. another one with just SSH running that you can point the above to and experiment with

If you don't know Docker, that's OK! It's easy and I will set you up to do that. Let me know.

This way, you don't have to mess around with either installing Ansible or setting up VMs/VPS/hosts, blah blah.
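As a rough sketch of that two-container setup, not a tested recipe (the image names and key wiring here are assumptions; any image with Ansible installed and any sshd-only image would do), a docker-compose file could look like:

```yaml
# Untested sketch: image names, user, and key wiring are placeholders.
services:
  control:
    image: willhallonline/ansible:latest    # any image with ansible installed
    volumes:
      - ~/.ssh/id_ed25519:/root/.ssh/id_ed25519:ro
    command: sleep infinity                 # exec in and run ansible by hand
  target:
    image: linuxserver/openssh-server       # just sshd, nothing else
    environment:
      - USER_NAME=ansible
      - PUBLIC_KEY=ssh-ed25519 AAAA...      # paste your own public key
```

From the control container, something like `ansible all -i 'target,' -m ping` (with the right user/key flags) would be enough to verify the pair is wired up.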


> Much of the job that ansible or other config management tools might have done is handled by docker build to bake container images, and k8s to manage custom running services

OK but there has to be something that executes the docker build and then kicks the jobs off - right?

What's that tool at the $dayjob?


> something that executes the docker build and then kicks the jobs off - right?

Yep. There's build time and deploy time.

Build time generally coincides with peer review: a build happens because someone changes something, and all changes are version controlled and peer reviewed, so it's usually some combination of tools that can be integrated with and triggered by the code review tool. In prehistoric times the code review tool might trigger a CI workflow in Jenkins, which invokes a glue shell script to invoke specific build or test tools. These days the runner might be more tightly integrated into the code review system, such as a GitHub or GitLab runner, using the runner's proprietary scripting language to call the same glue shell script that invokes the build tools.
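As an illustration of the "tightly integrated runner" variant, a minimal GitHub Actions workflow might look like the following, where `ci/build.sh` is a hypothetical glue script, not anything from this thread:

```yaml
# Sketch only: the workflow stays thin; the logic lives in a
# version-controlled glue script that calls the real build/test tools.
name: build
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build.sh
```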

At the $dayjob, deploys that change real-world state are coordinated and triggered by a meat layer of humans; they're not automatically triggered by commits getting merged, etc. A workflow automation system roughly equivalent to prehistoric Jenkins is used to define manually triggered deployment jobs for each environment. Those deploy jobs typically invoke some kind of glue shell script that invokes terraform or kubectl or whatever, which has the responsibility of coercing the environment into the desired state.


> why go through all the trouble of setting this up instead of just using AWS

1. Have you ever hosted something on AWS, for public consumption by parties not under your control, where you, personally were footing the bill?

2. Have you ever hosted something on AWS where a misconfiguration on your side and/or upset/impatient customer caused a bill so large that you had to shut the company down?

AWS will scale to fill your bank balance, regardless of whether you can afford to pay for that scale. AWS is great for elasticity when you can afford to pay.

AWS has no functional bill limits, even after 10 years. Those who post a link to AWS billing alerts haven't actually handled any meaningful scale; at that scale, a set of dedicated servers is great value.


Would this be possible using SQLite:

- I point it to a bunch of JSON files on disk. They have similar but not identical schemas (slight variations)

- I import them into one SQLite virtual table (I would rather not literally import them; think PostgreSQL's JSON FDW)

- Then index certain fields so looking up records by certain fields is very fast (faster than having to "FTS" through all the files)

- Allow fuzzy text search on certain fields (say a field was company name and other fields were city, street and human names)

All the while (best case) not actually having to import the files into the DB (it's OK if the indices need to be rebuilt every time)


There's a SQLite virtual table mechanism for reading from CSV files on disk, but I haven't seen the equivalent for JSON. In any case, provided you have the disk space, importing JSON is fast, and the single resulting .db file is easy to tolerate, in my opinion.

Unless you're talking about 10+GBs of JSON I'd recommend importing them and seeing how far you get.

Once the JSON has been loaded into tables the other things you want to do should all be very feasible.

- Fuzzy text search can be done using SQLite trigram indexes https://www.sqlite.org/fts5.html#the_experimental_trigram_to...

- I'd split JSON columns that you want to index out into indexed regular columns - there are a bunch of tricks in sqlite-utils for doing that, see https://simonwillison.net/2021/Aug/6/sqlite-utils-convert/
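As a stdlib-only sketch of a variation on that last point (table name and sample records are invented): instead of materializing separate columns, SQLite can also build an expression index directly on a JSON path, so lookups by that field avoid scanning every row's JSON body:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")

# Invented records standing in for the JSON files; note the schema drift.
records = [
    {"company": "Acme Corp", "city": "Berlin"},
    {"company": "Globex", "city": "Paris", "street": "Rue de Rivoli"},
]
conn.executemany(
    "INSERT INTO docs VALUES (?)", [(json.dumps(r),) for r in records]
)

# Expression index on a JSON path: lookups by company can use the index.
conn.execute(
    "CREATE INDEX idx_company ON docs (json_extract(body, '$.company'))"
)
row = conn.execute(
    "SELECT json_extract(body, '$.city') FROM docs "
    "WHERE json_extract(body, '$.company') = ?",
    ("Acme Corp",),
).fetchone()
print(row[0])  # Berlin
```

This needs a SQLite build with the JSON1 functions compiled in, which Python's bundled SQLite has had for years.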


Do you think unqlite would be a better fit?

Have you looked at unqlite?


Not really. I mean, yes, with unlimited effort you can, but you'd have to write your own virtual table implementation; there is no existing JSON virtual table module. You'd have to do the indexing yourself in your virtual table implementation, since it's not possible to add an SQLite index on a virtual table.

If you imported the JSON data into SQLite tables you still can't do it, because SQLite doesn't support GIN indexes. You would have to extract the data from the JSON into separate columns and then index those columns. Now you've got an index, but you still can't do a fuzzy text search on it. You'd have to use FTS5, or create yet more columns with something like a Soundex of the original value.

I don't think SQLite is a good fit here. PostgreSQL is the clear winner.


Do you think unqlite would be a better fit?

Have you looked at unqlite?


Yes! I tried this exact use case using sqlite-utils as a Python lib.

I basically made a Python script that opens each of the JSON files and inserts it into an SQLite in-memory DB using sqlite-utils insert.

Then you have a regular SQLite DB (in memory) that you can work with!
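A stdlib-only sketch of roughly that workflow, with invented file contents; sqlite-utils automates the varying-schema handling that's done by hand here:

```python
import glob
import json
import os
import sqlite3
import tempfile

# Invented sample files standing in for the real JSON on disk;
# note the slight schema variation between records.
tmpdir = tempfile.mkdtemp()
samples = {
    "a.json": {"company": "Acme", "city": "Berlin"},
    "b.json": {"company": "Globex", "street": "Rue de Rivoli"},
}
for name, rec in samples.items():
    with open(os.path.join(tmpdir, name), "w") as f:
        json.dump(rec, f)

paths = sorted(glob.glob(os.path.join(tmpdir, "*.json")))
records = [json.load(open(p)) for p in paths]

# Union of all keys becomes the column set (roughly what sqlite-utils
# automates when it inserts dicts and alters the table as needed).
columns = sorted({k for r in records for k in r})
db = sqlite3.connect(":memory:")
db.execute(f"CREATE TABLE docs ({', '.join(columns)})")
db.executemany(
    f"INSERT INTO docs VALUES ({', '.join(':' + c for c in columns)})",
    [{c: r.get(c) for c in columns} for r in records],
)
count = db.execute("SELECT count(*) FROM docs").fetchone()[0]
print(count)  # 2
```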


Ha ha haha - EXACTLY to the last dot!

I am terrified of where we are headed as a society. I get that writing blog posts is more time-consuming than tweeting, but for the consumer, is this content really valuable in the long term?


> I don't have a Twitter account, but I view posts and threads simply by opening Twitter links in a private browser window. Does that not work for you all?

No. After a while there's a huge modal layer asking me to log in.


> Every day there are posts here with some Twitter thread as the source

These threads are overwhelmingly popular, which surprises me.

These threads always start off like "Here's how to make $100MM in 10 hours" and then continue with multiple sub-posts of the most generic nonsense I've ever seen.

What's even more interesting is that people think they get tremendous value out of them, and they share/retweet and go crazy about them.

Am I really stupid or are most people on Twitter who engage with these threads on some kind of hallucinogen(s)?


While I am happy to go into TAM and related metrics, I am assuming this is your first startup.

In that case, don't worry about the market. In fact, what matters most is having just a few dozen paying customers who fit into one to at most three personas, whom you understand very well, can reach easily, and can get immediate feedback from.

> I've conducted a small survey to identify a probable market for a problem before started building anything, and it turned out to have mixed results.

- How exactly did you conduct the small survey?

- What kind of questions did you ask?

- Did you use a form, or conduct 1:1 interviews?

- How did you source the participants?

- What was the motivation of the participants?

> Like 55%-45% split

split of what?

Is your solution a whisper idea? Or can you share it?


Well, it's not a groundbreaking idea of any sort; it's about providing support for female reproductive health issues. Where I'm coming from, female reproductive issues are considered too sensitive to talk about in public, and sometimes women find it difficult to discuss them even with their doctors.

The survey was answered by women of reproductive age, via a questionnaire distributed mostly through LinkedIn.

Well, the question is not just about this idea but about any idea: how do you know if a problem is big enough to be worth solving?

And how do you proceed when there are mixed signals from talking to customers?


> Well, the question is not just about this idea but about any idea: how do you know if a problem is big enough to be worth solving?

This is premature optimization. How many data points do you have so far?

Did you interview 50-60 of one persona yet?

There is no one answer, because the detail lies entirely in the specifics, which is why I asked you those questions. You seem to have missed them.

> And how do you proceed when there are mixed signals from talking to customers?

There are no mixed signals. You're uncovering personas.

Each persona outputs a different but unique signal.

Segment them. Then try to resonate with one to at most three personas, depending on your mental bandwidth and resources.

By the way, how did you learn to do market research before building a product? You're already ahead of the curve and more likely to be successful than most first-time founders I've seen.

