- https://dave-bot.com -> a full-stack AI platform where you can generate videos, images, music, code, 3d objects with frontier Gen AI models.
- https://headsnap.io -> a platform where you can generate images of yourself from 4 selfies.
- https://quantiq.live -> a service providing financial and historical data for stocks, as well as government trades.
- https://aivestor.tech -> an AI agent that picks small/midcap stocks and trades them using the Alpaca API. It uses Reddit, news, Polymarket, Google Trends and many other data sources to make investment decisions.
- @Polyglot_lingua_bot -> a voice-enabled Telegram-based bot that can help you learn new languages.
- https://select.supply -> a directory of carefully-curated and well-crafted products.
All of those allowed me to quit my day job and live a comfortable and flexible life. I still invest time in maintenance and adding new features, but I love coding, marketing and everything that comes with promoting and selling a SaaS (and I also have a serious addiction to Stripe notifications).
On top of that, I developed my own software agency where I help clients build and scale software (https://bitheap.ch).
We handle this in Fireproof with a deterministic default algorithm, in addition to having a hash-based tamperproof ledger of changes. Fireproof is not SQL based, it is more like CouchDB or MongoDB, but with cryptographic integrity. Apache 2.0 https://use-fireproof.com
In practice during CouchDB's heyday, with lots of heavy users, the conflict management API almost never mattered, as most people could make do with deterministic merges.
I've worked in one of the top computing labs, with top GPU computing startups, have investor money from Nvidia, wrote CUDA for years, and hire people to write GPU code. And I would say most people -- even Nvidia employees and our own -- are individually bad at writing good CUDA code: it takes a highly multi-skilled team working together to make anything more than demoware. Most people who say they can write CUDA, when you scratch a little at the items I put below, turn out to be able to do so only for basic one-offs. Think a finance person running one job for a month, not the equivalent of a senior python/java/c++ developer writing whatever reliable backend code they're hired to build that lives on.
To give a feel, while at Berkeley, we had an award-winning grad student working on autotuning CUDA kernels and empirically figuring out what does / doesn't work well on some GPUs. Nvidia engineers would come to him to learn about how their hardware and code works together for surprisingly basic scenarios.
It's difficult to write great CUDA code because it needs to excel in multiple specializations at the same time:
* It's not just writing fast low-level code, but knowing which algorithm to use. So you or your code reviewer needs to be an expert at algorithms. Worse, those algorithms are high-level, unknown to most programmers, and specific to hardware models -- think scenarios like NUMA-aware data parallel algorithms for irregular computations. The math is generally non-traditional too, e.g., esoteric matrix tricks to manipulate sparsity and numerical stability.
* Ideally you write for more than one generation of architecture, and each generation changes all sorts of basic constants around memory/thread/etc. counts at multiple layers of the architecture. If you're good, you also add some sort of autotuning & JIT layers around that to adjust for different generations, models, and inputs.
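The autotuning idea in that last point can be sketched in a few lines. This is a toy illustration in Python, not real CUDA: `run_kernel` is a stand-in cost model (the penalty term and the 256-thread sweet spot are invented for the example), but the search loop has the same shape a real autotuner takes.

```python
import time

def run_kernel(block_size, n=1 << 20):
    # Stand-in for launching a real CUDA kernel: a made-up cost model where
    # too-small blocks underutilize the SM and too-large blocks spill registers.
    waste = abs(block_size - 256) / 256.0
    t0 = time.perf_counter()
    _ = sum(range(1000))  # token work so timing isn't pure noise
    return (time.perf_counter() - t0) + waste * 1e-3

def autotune(candidates=(64, 128, 256, 512, 1024), trials=3):
    # Time each launch configuration a few times and keep the best median.
    best, best_t = None, float("inf")
    for bs in candidates:
        t = sorted(run_kernel(bs) for _ in range(trials))[trials // 2]
        if t < best_t:
            best, best_t = bs, t
    return best

print(autotune())  # with this toy cost model, 256 wins
```

A real autotuner additionally caches winners per (GPU model, input shape) pair, which is where the JIT layer comes in.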
* This stuff needs to compose. Most folks are good at algorithms, software engineering, or performance... not all three at the same time. Doing this for parallel/concurrent code is one of the hardest areas of computer science. Ex: Maintaining determinism, thinking through memory life cycles, enabling async vs sync frameworks to call it, handling multitenancy, ... . In practice, resiliency in CUDA land is ~non-existent. Overall, while there are cool projects, the Rust etc revolution hasn't happened here yet, so systems & software engineering still feels like early unix & c++ vs what we know is possible.
* AI has made it even more interesting nowadays. The types of processing on GPUs are richer now, multi+many GPU is much more of a thing, and disk IO as well. For big national lab and genAI foundation model level work, you also have to think about many racks of GPUs, not just a few nodes. While there's more tooling, the problem space is harder.
This is very hard to build for. Our solution early on was figuring out how to raise the abstraction level so we didn't have to. In our case, we figured out how to write ~all our code as operations over dataframes that we compiled down to OpenCL/CUDA, and Nvidia thankfully picked that up with what became RAPIDS.AI. Maybe more familiar to the HN crowd, it's basically the precursor and GPU / high-performance / energy-efficient / low-latency version of what the duckdb folks recently began on the (easier) CPU side for columnar analytics.
It's hard to do all that kind of optimization, so IMO it's a bad idea for most AI/ML/etc teams to do it. At this point, it takes a company at the scale of Nvidia to properly invest in optimizing this kind of stack, and software developers should use higher-level abstractions, whether pytorch, rapids, or something else. Having lived building & using these systems for 15 years, and worked with most of the companies involved, I haven't put any of my investment dollars into AMD nor Intel due to the revolving door of poor software culture.
Chip startups also have funny hubris here, where they know they need to try, but end up having hardware people run the show and fail at it. I think it's a bit different this time around b/c many can focus just on AI inferencing, and that doesn't need as much of what the above is about, at least for current generations.
Edit: If not obvious, much of our code that merits writing with CUDA in mind also merits reading research papers to understand the implications at these different levels. Imagine scheduling that into your agile sprint plan. How many people on your team regularly do that, and in multiple fields beyond whatever simple ICML pytorch layering remix happened last week?
Interestingly, a small company called Ogma already did something very similar back in 2021 (on an embedded system, no less). This (https://ogma.ai/2021/07/unsupervised-behavioral-learning-ubl...) is a description/video of how they got a small RC car to predict the next frame of its video feed given the action it was about to take, and thereby made the car navigate to a given location when fed with a still frame of that location (all this with online learning, and no backprop).
Instead of vicreg, they induced their latent state with sparse auto-encoding. Also they predicted in pixel, as opposed to latent, space. The white paper describing their tech is a little bit of a mess, but schematically, at least, the hierarchical architecture they describe bears a strong resemblance to the hierarchical JEPA models LeCun outlined in his big paper from a few years ago. A notable difference, though, is that their thing is essentially a reflex agent, as opposed to possessing a planning/optimization loop.
It started as a side project to explore the latest AI trends.
Now it’s something we use daily — and others are starting to as well.
Thoughtcatcher is a lightweight, AI-powered notes + reminders app that acts like a memory companion.
It helps you:
- Capture raw thoughts and auto-tag them using AI
- Set smart reminders triggered by context and meaning (this was a game changer for me personally)
- Search and chat with your notes like a conversation — not just by keywords, but by intent
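For the "search by intent" part, I can only guess at the implementation, but a common pattern is to embed notes and queries as vectors and rank by cosine similarity. The sketch below substitutes a bag-of-words vector for a real neural embedding model so it stays self-contained; the function names and example notes are made up.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # Real systems use a neural sentence encoder so "pricing model" and
    # "how we charge" land near each other; raw word counts cannot do that.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(notes, query):
    # Return the note whose vector is closest to the query's.
    q = embed(query)
    return max(notes, key=lambda n: cosine(embed(n), q))

notes = [
    "revisit pricing model after the new release",
    "buy milk and eggs",
    "call the dentist on monday",
]
print(search(notes, "what did I want to do about the pricing model"))
```

Swapping `embed` for an actual sentence-embedding model is what turns keyword matching into the intent matching described above.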
Example?
You’re walking out of a meeting and think:
“We should revisit that pricing model after the new release.”
You jot it into ThoughtCatcher — no structure, no stress.
A week later, right before the next sprint planning, it reminds you.
Just when you would’ve forgotten — it remembers.
What started as a learning project has grown into something useful — not just for individuals, but for teams too.
We’re now exploring B2B use cases like:
• Project knowledge management
• Shared team notes with smart search and chat
• Meeting follow-up insights and reminders
• AI-powered team memory for client or product work
Want to try it out?
Android users: Download the app
iOS users: Use the PWA — just “Add to Home Screen”
Still early. Still learning.
But ThoughtCatcher already feels like something I wish I had years ago.
Would love your feedback or thoughts.
And if you’re building something similar — let’s connect.
I've been in high tech for 30 years, and I've been laid off many times, most often from failed start ups. I _strongly_ disagree with a fully cynical response of working only to contract, leveraging job offers for raises, etc.
There are a few reasons for this, but the most concrete is that your behavior in this job has an impact on getting the next one. The author is correct that exemplary performance will not save you from being laid off, but when layoffs come your next job often comes from contacts that you built up from the current job, or jobs before. If people know you are a standout contributor then you will be hired quickly into desirable roles. If people think you are a hired gun who only does the bare minimum that next role will be harder to find.
On top of that, carrying around bitterness and cynicism is just bad for you. Pride in good work and pleasure in having an impact on customers and coworkers is good for you. Sometimes that means making dumb business decisions like sacrificing an evening to a company that doesn't care, but IMO that sort of thing is worth it now and then.
To be sure, don't give your heart away to a company (I did that exactly once, never again) because a company will never love you back. But your co-workers will.
"Each act, each occasion, is worse than the last, but only a little worse. You wait for the next and the next. You wait for one great shocking occasion, thinking that others, when such a shock comes, will join with you in resisting somehow. You don't want to act, or even talk, alone; you don't want to 'go out of your way to make trouble.' Why not?-Well, you are not in the habit of doing it. And it is not just fear, fear of standing alone, that restrains you; it is also genuine uncertainty. Uncertainty is a very important factor, and, instead of decreasing as time goes on, it grows. Outside, in the streets, in the general community, 'everyone' is happy. One hears no protest, and certainly sees none. You know, in France or Italy there would be slogans against the government painted on walls and fences; in Germany, outside the great cities, perhaps, there is not even this. In the university community, in your own community, you speak privately to your colleagues, some of whom certainly feel as you do; but what do they say? They say, 'It's not so bad' or 'You're seeing things' or 'You're an alarmist.'
"And you are an alarmist. You are saying that this must lead to this, and you can't prove it. These are the beginnings, yes; but how do you know for sure when you don't know the end, and how do you know, or even surmise, the end? On the one hand, your enemies, the law, the regime, the Party, intimidate you. On the other, your colleagues pooh-pooh you as pessimistic or even neurotic. You are left with your close friends, who are, naturally, people who have always thought as you have....
"But the one great shocking occasion, when tens or hundreds or thousands will join with you, never comes. That’s the difficulty. If the last and worst act of the whole regime had come immediately after the first and smallest, thousands, yes, millions would have been sufficiently shocked—if, let us say, the gassing of the Jews in ’43 had come immediately after the ‘German Firm’ stickers on the windows of non-Jewish shops in ’33. But of course this isn’t the way it happens. In between come all the hundreds of little steps, some of them imperceptible, each of them preparing you not to be shocked by the next. Step C is not so much worse than Step B, and, if you did not make a stand at Step B, why should you at Step C? And so on to Step D.
"And one day, too late, your principles, if you were ever sensible of them, all rush in upon you. The burden of self-deception has grown too heavy, and some minor incident, in my case my little boy, hardly more than a baby, saying ‘Jewish swine,’ collapses it all at once, and you see that everything, everything, has changed and changed completely under your nose. The world you live in—your nation, your people—is not the world you were born in at all. The forms are all there, all untouched, all reassuring, the houses, the shops, the jobs, the mealtimes, the visits, the concerts, the cinema, the holidays. But the spirit, which you never noticed because you made the lifelong mistake of identifying it with the forms, is changed. Now you live in a world of hate and fear, and the people who hate and fear do not even know it themselves; when everyone is transformed, no one is transformed. Now you live in a system which rules without responsibility even to God. The system itself could not have intended this in the beginning, but in order to sustain itself it was compelled to go all the way."
— Milton Sanford Mayer, They Thought They Were Free: The Germans 1933-45
The C library has many problems. First, many of the constants are provided as macros (completely valid for C) but this can become a problem in C++ which is generally migrating towards constexpr functions or objects instead of macros. Second, many of the macros specify de/serialization to bitfields and the bitfields raise compiler warnings about safety when compiled in C++ with `-Wall -Wextra`. That's on top of the library being far too complex. I understand the need for an XML generator, but as far as I understand, the C library and C++ library do not provide an easy way to dynamically specify message types at runtime instead of at compile time (contrast with other message protocols). The library's headers are sensitive to the order they're included and they provide configuration via declarations. One of the configuration items is "how many" global "channels" to allocate. These global channels are not thread safe and are used by, eg, QGroundControl.
The mavsdk library for C++ is (for the most part) a wrapper around the C library, and it brings additional problems. Because it sits on the C library, things aren't very type-safe under the hood. Mavsdk's C++ wants to be modern but makes several design choices that I disagree with, particularly around threading (instantiating the C++ library creates a thread to handle its own event queue, and each socket created via the library gets an additional thread of its own) and serialization (the C++ code does not usefully provide type-safe serialization). Thread-per-socket is a common pattern but definitely not helpful on power-limited devices: power consumption and latency are higher compared to asynchronous socket programming, and both matter for flight duration and control feedback. These design choices also make it difficult to unit test with Googletest's EXPECT_DEATH tests, which use `fork()` and can be sensitive to thread problems.
Auterion is writing a Mavlink library, libmav. I haven't yet looked at its internals, but I understand they want to address a lot of the shortcomings of the official mavsdk C++ library. So I have high hopes for that ... but alas I already have things written against mavsdk's API.
Speaking of threading problems, I've encountered problems with QGroundControl's use of threads. A pattern common among these libraries is poor lifetime management, and poor use of shared_ptr and/or mutexes to guard against races. It's the typical kind of mistake that even experienced engineers make ... when they don't use tooling such as Thread Sanitizer or Address Sanitizer to warn (and promote the warning to an error), and/or have insufficient test coverage.
Past the libraries, the Mavlink protocol itself also leaves a lot to be desired. It's a protocol that wants to be at nearly every layer in the OSI model, and it smells of reinventing the wheel at all of those layers. At layer 2, there's the fact that mavlink is designed to be transmitted directly over a radio, telephone modem, a serial bus such as RS232, or even a multi-component bus such as I2C. At layer 3 we have device and component addressing, and forwarding. At layer 4 we have message sequences, checksums, and retransmissions. Then layer 5 is a little fuzzy. Even at layer 6 there is a custom file transfer protocol, and there's even a custom implementation of a terminal session!
It stinks to the core of a hobby protocol which "matured" by reinventing every wheel there could be. That's just my observation as a 20-year software engineer who entered the hardware/embedded field a few years ago.
Some components or implementations/firmwares have different interpretations of the meaning of data fields. If a coordinate is XYZ then what is the frame of reference for the XYZ? If you've got an orientation with yaw/pitch/roll, then what is the frame of reference? Is altitude based on AMSL or AGL or pressure instrumentation? Are we using WGS84 or something else? Even some messages themselves are deprecated, and there are some custom messages that vendors will use (remember: using custom messages requires a recompile of the library, with a customization to add the message definitions).
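As a concrete instance of the frame-of-reference problem: NED (north-east-down) and ENU (east-north-up) coordinates both show up around MAVLink-adjacent code, and confusing them silently swaps the horizontal axes and flips the vertical one. A minimal sketch:

```python
def ned_to_enu(n, e, d):
    # NED lists north, east, down; ENU lists east, north, up.
    # Converting means swapping the horizontal axes and negating "down".
    return (e, n, -d)

# A 10 m climb is d = -10 in NED; misread as ENU "up" it looks like a dive.
print(ned_to_enu(100.0, 50.0, -10.0))  # (50.0, 100.0, 10.0)
```

The altitude questions (AMSL vs AGL, WGS84 vs something else) have the same shape: the numbers parse fine either way, so nothing fails until the vehicle moves the wrong direction.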
That's just what I have time to write off the top of my head. There's a ton more problems with mavlink, from an experienced software engineer's perspective.
I can do (almost) all of these things with a traditional Linux distribution, git, and a few scripts written in my language of choice, and I can do it faster.
Nix seems more competitive with something like Conda or Spack than a traditional package manager like Apt or Pacman. Similarly, NixOS seems more suitable for deployment in containerized environments built on woefully outdated deps than personal dev setups with generally higher security requirements.
I've experimented with both Nix and NixOS, and while I adore both the beautifully functional language and the endlessly helpful community, I haven't been able to justify the overhead of maintaining a nix.conf, flakes or no flakes, on top of everything else I need to do on my machines. I kept running into issues where some program I needed wasn't already integrated with the Nix way of doing things or something that worked on my Linux setup failed to work on my macOS setup despite multiplatform guarantees.
Nevertheless, I wait with bated breath for the day that Nix replaces Conda among a critical mass of practitioners in my field. I've tried getting a few of my teams to switch (with help from Devbox and Devenv), but it's a struggle to get people to learn new things when their (often worse) way of doing things works well enough most of the time. Heck, I count myself lucky for getting some people to switch from the atrociously slow Conda to the significantly faster Mamba, as even that was inordinately difficult despite the latter being a drop-in replacement for the former.
Hi Chuck, I was the system / project manager on this. I had always wondered why folks didn't just simulate the whole thing in software; then I realized that doing it with Ettus Research SDRs + moby networking + a hella big Dell box to coordinate it all was necessary because the digital went out to the SDR, and out of the SDR came RF *at the frequencies of interest*. Although all of it was cabled RF (using cables I spec'd and acquired), it could just as easily have been put to antennas and run 'over the air' (except of course that environment is harder to completely control).
The artificial interferers were able to be consistent across the entire test. The successful entries had no coordination, and still maintained spectacular bps throughput.
This work was done several years ago now, so I don't know if the winning algorithms are now deployed in a 'real' over-the-air system.
This was a very technically challenging project, pretty much at the limit of what could be done at the time. But it worked!
The key thing people miss about agile (and project management in general) is that you have to tune the process to the situation.
If you look at the PMBOK (Project Management Body of Knowledge), it is really a comprehensive list of all the things you might have to think about while running a project (e.g. hiring, communicating with the public). All of them are relevant to some project, but each needs a lot of attention on some projects and hardly any on others.
I've worked on "A.I." projects where the sprint involves running a batch job that might take two days if it all works right. If it doesn't work right you might have to retry a few times.
When I was in charge of that batch job I would start it as soon as possible, sometimes with a week and a half to spare. Whenever somebody else was in charge of it they would start it with half a day or a day to run and we would blow the end of the sprint.
I blame the continuous stream of "urgent but not important" communications generated by agile for that.
The timeboxing of planning is also a very bad idea. I worked at a place where we had a 2-hour timebox for planning, but after that we were nowhere near a realistic plan for the sprint; it would take another 2 or 3 days of knock-down, drag-out meetings that left us all exhausted before we understood what we had to do. After that the work was mostly downhill, at least the way I saw it, though one of the other team members would consistently burn the midnight oil at the end.
Often agile "teams" have a "normalization of deviance" situation where they have to do one thing (or say they are doing one thing) so they can say they are sticking to the process, but actually do something entirely different to get the job done.
This started out as an excuse to try writing C code safely. Eventually I stumbled upon Frama-C, which is amazing.
But anyway, moochacha is a simple file encryption command based on libsodium's Argon2id, BLAKE2b, and ChaCha20-Poly1305. Nothing fancy, just symmetric encryption with a keyfile. It also supports optional padding to conceal file sizes.
After goofing around for months, I've just about reached the limit of my abilities, and am tempted to actually use it. Please talk some sense into me! See the README for a full specification. The UI could certainly be improved, but I'm mostly curious if there's anything unsafe about it.
Also, am I reinventing the wheel here? Similar tools either involve asymmetric algorithms or derive encryption keys directly from the password. I want the output to be safe even with a bad password, and even if quantum computing becomes absurdly successful.
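For context on "safe even with a bad password": one common construction (not necessarily what moochacha does -- see its README for the real specification) is to stretch the password with a memory-hard KDF and then bind it to the high-entropy keyfile, so that neither the password nor the keyfile alone yields the key. A stdlib-only Python sketch, with scrypt standing in for Argon2id and keyed BLAKE2b doing the binding:

```python
import hashlib, os

def derive_key(password: bytes, keyfile_bytes: bytes, salt: bytes) -> bytes:
    # Stretch the (possibly weak) password with a memory-hard KDF...
    pw_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    # ...then bind it to the high-entropy keyfile via keyed BLAKE2b, so the
    # result is unguessable even if the password alone is not.
    return hashlib.blake2b(pw_key, key=keyfile_bytes[:64], digest_size=32).digest()

salt = os.urandom(16)
keyfile = os.urandom(32)
k1 = derive_key(b"hunter2", keyfile, salt)
k2 = derive_key(b"hunter2", os.urandom(32), salt)  # same password, new keyfile
assert k1 != k2 and len(k1) == 32
```

The derived key would then feed ChaCha20-Poly1305 for the actual encryption step.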
For people who find this interesting I would highly recommend the book The Emerald Mile [0]. An amazingly well-written book about Glen Canyon, the Grand Canyon, Lake Powell, Glen Canyon Dam, dory people, racing boats down the canyon, and a near catastrophe at the dam.
Also, let's not forget the classic Desert Solitaire [1]. It too is well written and very interesting; in the book they take small inflatable boats and just float down Glen Canyon not long before it was flooded.
Being able to narrow your focus to the important big things.
Building for the long term.
Everyone wants to do things quickly. That’s hyperscale hangover. While you can make a decent whiskey in just a year or two, the really good stuff requires more time.
Practice saying “no” to good ideas that excite you.
Here is how I used that book, starting with a solid foundation in linear algebra and calculus.
Learn statistics before moving on to more complex models (neural networks).
Start by learning OLS and logistic regression, cold. Cold means you can implement these models from scratch using only numpy ("I do not understand what I cannot build"). Then try to understand regularization (lasso, ridge, elastic net), where you will learn about the bias/variance tradeoff, cross-validation and feature selection. These topics are explained well in ESL.
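A minimal version of that from-scratch step, assuming numpy: closed-form OLS via the normal equations. The synthetic data and coefficients are invented for the example.

```python
import numpy as np

def ols_fit(X, y):
    # Add an intercept column, then solve the normal equations.
    # pinv (computed via SVD) stays stable when X'X is singular or
    # ill-conditioned, which is why libraries prefer it to a plain inverse.
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.linalg.pinv(Xb.T @ Xb) @ Xb.T @ y

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
beta = ols_fit(X, y)
print(np.round(beta, 1))  # recovers roughly [1, 3, -2]
```

Once this is second nature, logistic regression is the same exercise with an iterative solver instead of a closed form.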
For OLS and logistic regression I found it helpful to strike a 50-50 balance between theory (derivations and problems) and practice (coding). For later topics (regularization etc) I found it helpful to tilt towards practice (20/80).
If some part of ESL is unclear, consult the statsmodels source code and docs (top preference) or scikit (second preference, I believe it has rather more boilerplate... "mixin" classes etc). Approach the code with curiosity. Ask questions like "why do they use np.linalg.pinv instead of np.linalg.inv?"
Spend a day or five really understanding covariance matrices and the singular value decomposition (and therefore PCA which will give you a good foundation for other more complicated dimension reduction techniques).
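The covariance/SVD/PCA connection fits in one sketch: the right singular vectors of the centered data matrix are the eigenvectors of its covariance matrix, i.e. the principal axes. The synthetic data here is invented for the example.

```python
import numpy as np

def pca(X, k):
    # Center, then SVD: Vt rows are the principal directions, and the
    # squared singular values (scaled by n-1) are the variances along them.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                    # principal directions
    explained = S[:k] ** 2 / (len(X) - 1)  # variance along each direction
    return Xc @ components.T, components, explained

rng = np.random.default_rng(1)
# Data stretched mostly along the x-axis; PCA should recover that axis.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
scores, comps, var = pca(X, 1)
print(var[0])  # close to 9, the variance injected along the first axis
```

Deriving why `Vt` gives the covariance eigenvectors (write out Xc'Xc in terms of the SVD) is a worthwhile one-page exercise.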
While not an AI expert, I feel this path has left me reasonably prepared to understand new developments in AI and to separate hype from reality (which was my principal objective). In certain cases I am even able to identify new developments that are useful in practical applications I actually encounter (mostly using better text embeddings).
As parents we wanted to be one of the better ones. A while back, we started with Bringing up Bébé, and since then learnt How to Talk so Kids will Listen and Listen so Kids will Talk; we were even Prepared while Reviving Ophelia. This year, we realized, we want to just settle on being A Good Enough Parent.
It really is amazing. Things it did in less than 10 seconds from hitting enter:
- opengl raytracer with compilation instructions for macos
- tictactoe in 3D
- bittorrent peer handshake in Go from a paragraph in the RFC
- http server in go with /user, /session, and /status endpoints from an english description
- protocol buffer product configuration from a paragraph english description
- pytorch script for classifying credit card transactions into expense accounts and instructions to import the output into quickbooks
- quota management API implemented as a bidirectional streaming grpc service
- pytorch neural network with a particular shape, number of input classes, output classes, activation function, etc.
- IO scheduler using token bucket rate limiting
- analyze the strengths/weaknesses of algorithms for 2 player zero sum games
- compare david hume and immanuel kant's thoughts on knowledge
- describe how critics received george orwell's work during his lifetime
- christmas present recommendations for a relative given a description of their interests
- poems about anything. love. cats. you name it.
Blown away by how well it can synthesize information and incorporate context.
This suggestion is humorous, but absolutely true: Potty Training In 3 Days.
Before having children, I thought I was fairly empathetic and introspective, but raising a child helped me realize how superficial those traits in myself were.
I'm being completely honest when I say this book made me a better leader and project manager - having a better understanding of the motivations of others, incentivizing those looking to you for guidance based on their own goals/desires, providing those with tools they need to succeed, and taking a macro view of a problem and allowing those under me to flourish and find creative ways to solve problems that take advantage of their strengths and idiosyncrasies.
I'm in no way suggesting that you infantilize those around you, just that teaching my toddler to shit opened my eyes to the way I approached problems, and Brandi Brucks' book helped me approach things differently with great success!
Have you tried the --initial_prompt CLI arg? For my use, I put a bunch of industry jargon and names that are commonly misspelled in there and that fixes 1/3 to 1/2 of the errors.
I was initially going to use Azure Cognitive Services and train it on a small amount of test data, but after Whisper was released for free I use Whisper + OpenAI GPT-3 trained to fix the transcription errors by 1) taking a sample of transcripts produced by Whisper, 2) fixing the errors, and 3) fine-tuning GPT-3 using the unfixed transcriptions as the prompt and the corrected transcripts as the result text.
Whisper with the --initial_prompt containing industry jargon plus training GPT-3 to fix the transcription errors should be nearly as accurate as using a custom-trained model in Azure Cognitive Services but at 5-10% of the cost. Biggest downside is the amount of labor to set that up, and the snail's pace of Whisper transcriptions.
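One practical detail, assuming the open-source `whisper` CLI: the model conditions on roughly the last 224 tokens of the prompt, so it pays to keep the glossary short and put the most important terms last. A small helper (the glossary terms and filename below are invented for the example):

```python
def build_initial_prompt(glossary, max_chars=800):
    # Whisper only sees about the last 224 tokens of the prompt; the
    # character cap here is just a rough proxy for that token limit,
    # and slicing from the end preserves the most recently listed terms.
    prompt = "Glossary: " + ", ".join(glossary) + "."
    return prompt[-max_chars:]

jargon = ["Kubernetes", "Grafana", "kubectl", "Istio", "eBPF"]
prompt = build_initial_prompt(jargon)
print(prompt)
# Then, on the command line (hypothetical file):
#   whisper meeting.wav --initial_prompt "Glossary: Kubernetes, ..." --model medium
```

The same string can be passed as `initial_prompt=` to `model.transcribe()` when using the Python API instead of the CLI.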
3blue1brown linear algebra + the linear algebra chapter from “all the math you missed but need to know for graduate school” - linear algebra and abstract vector spaces in general finally feel familiar.
Also, the Timbuktu manuscripts - showed a history that I had never really heard of. These are written manuscripts of African scholars, hundreds of years old, that still exist today. Some record the history of great West African civilizations along with other things they studied (e.g. science, religion, math, literature, etc). I was never taught that this history even existed, yet I was made to learn various details about Asian, European, Middle Eastern, and Central/South American history. This, and the attempts to destroy/steal these manuscripts at various points in history, made it click how serious the power of controlling information, and ultimately influencing beliefs, can be with respect to giving legitimacy to various rulers/authorities. Beliefs/perceptions matter quite a lot. This 1hr lecture is quite good: https://m.youtube.com/watch?v=lQiqyyRfL2Y&t=16s
Fun question! I would suggest the following (in no particular order), subject to the proviso that you do need to be sat next to them to help manage frustration, especially in the beginning (although my personal take is that they shouldn’t be left to play by themselves at all at that age) - particularly as they learn the controller, general video game conventions, and the specifics of each game:
- Breath of the Wild
- Animal Crossing
- Stardew Valley
- Minecraft
- Super Mario Odyssey
- Super Mario 3D World
- Rayman Legends
- Ratchet & Clank
- It Takes Two
- Slay the Spire
- Journey
- Spiderman and Miles Morales
My son’s favourite superhero - far and away - is Spiderman, in large part thanks to the PlayStation games. Pretty great role model. Kids find swinging through the city utterly exhilarating.
It Takes Two was such a fantastic, memorable experience for both of us - he still talks about it months later. It does require quite a lot of a kid, though - better for when they’ve got a year’s experience.
And trying to catch all the insects and fish in Animal Crossing kicked off a passion in him for the real things, to say nothing of what it taught him about animals generally, time and seasonality.
A Nintendo Switch is probably a good place to start, although as he gets older I’m encouraging him to move more over to the PlayStation (partly because it’s so much cheaper over time!).
Switch Joycons are great for small hands, too, although most kids seem to be able to manipulate a full-size controller by age 4-5.
"You are in control of your data. Leon lives on your server"
Speech-to-Text: Google Cloud, IBM Watson, Coqui STT, Alibaba Cloud (coming soon), Microsoft Azure (coming soon)
So the AI assistant lives on my server, but if I want good-quality speech recognition, everything I say is sent through a US cloud service. The only offline option, Coqui, has a 7.5% word error rate [1] on LibriSpeech test-clean, which is worse than Mozilla Deepspeech 2 from 2016 [2]. State of the art would be around 1.4% [3], meaning 81% fewer errors than Coqui.
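For the arithmetic behind "81% fewer errors": the figure is the relative word-error-rate reduction.

```python
def relative_error_reduction(baseline_wer, new_wer):
    # Fraction of the baseline system's errors that the better system avoids.
    return (baseline_wer - new_wer) / baseline_wer

print(round(relative_error_reduction(7.5, 1.4) * 100))  # 81
```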
>It's fun to think about how to potentially automate this kind of tracking
I download my entire history regularly and use https://yacy.net/ to index it all. It's essentially like a local search engine. Also works on the local file system and across machines.
Sailboat - My wife and I want a solution for our sailboat that allows for intelligent vocal control of ship systems. Quickly launching a series of activities with simple verbal commands or receiving verbal updates on conditions would be amazing. Easy things like automated voice capture to text for log books and maintenance would be helpful. "Oil level at 100%, Oil quality good, coolant level 100%, coolant quality good. Schedule oil change at 1500 engine hours." All of this must be done WITHOUT a constant internet connection.